
Re: capistrano deployments w/ wo


  • Subject: Re: capistrano deployments w/ wo
  • From: Lachlan Deck <email@hidden>
  • Date: Wed, 19 Nov 2008 22:29:48 +1100

Hi Michael,

On 19/11/2008, at 6:06 PM, Michael Bushkov wrote:

> On Tue, Nov 18, 2008 at 11:08 PM, Lachlan Deck <email@hidden> wrote:
> <...>
> Yes, maybe I simplified things too much. I'll add backup creation and
> rollback to this example.

Great.

So here are some ideas for bonus points ... at least for the advanced guide.
It'd be good to see (even if not in your immediate plans) versioning used when
deploying, including auto-injecting the new deployments into JavaMonitor (or
similar) without pulling down the old ones. So there'll need to be *.cap
tasks for:
- putting up a new version of an app (which doesn't overwrite an old one)
- starting up the new instance(s)
- setting the old instances to refuse new sessions
- removing apps
- uploading to the current app via rsync, e.g., if only fixing a resource
- dealing with split installs (keeping versioning in mind)
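A minimal Ruby sketch of the first of those tasks (the folder layout, the `upload_version` helper name, and the duplicate check are assumptions for illustration, not an actual Capistrano recipe):

```ruby
require "fileutils"

# Hypothetical helper for "putting up a new version of an app which
# doesn't overwrite an old one": each build gets its own versioned folder.
def upload_version(deploy_root, app, version)
  dest = File.join(deploy_root, app, version)
  raise "version already deployed: #{dest}" if File.exist?(dest)
  FileUtils.mkdir_p(dest)
  dest # a real cap task would rsync the fresh build into dest here
end
```

A real recipe would wrap this in a `task` and run the rsync remotely; the point is just that existing versions are never touched.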


:-)

> Wow, that sounds impressive :) Do you use this model of deployment?

Currently we're pulling builds from Bamboo. So after each svn commit, Bamboo runs the build (Maven in my case, which produces a tar.gz for both the app and the webserver resources). We've then got shell scripts that, when manually called, pull a specific build from Bamboo and rsync/unpack it to a certain location (whether for the test or deployment environment).


In JavaMonitor, for each app we define multiple instances per server, pointing at the 'a', 'b' (and sometimes 'c') folders for that app. So the folder structure is like so:
/<...>/javaMonitorAppName/a/ProjectName.woa/
/<...>/javaMonitorAppName/b/ProjectName.woa/
/<...>/javaMonitorAppName/c/ProjectName.woa/
/<...>/javaMonitorAppName/Properties/log4j.properties
/<...>/javaMonitorAppName/Properties/jdbc.properties
/<...>/javaMonitorAppName/Properties/runtime.properties


This way, if we need to fire up new instances of the same version we can, whilst killing off the old ones (e.g., if an out-of-memory error is hit).

The webserver resources are placed in a similar structure in the relevant location (which is defined per instance in JM). The actual static resources (those simply under Apache's control) we've not yet versioned, and our versioned split install needs some improvement: some CSS files hard-code references to the version. This can be easily solved during the deployment phase by regex-replacing certain tokens.
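That deploy-time token replacement could be as simple as the following sketch (the `@@VERSION@@` token name is an assumption; any agreed marker would do):

```ruby
# Hypothetical deploy-time substitution: swap a version token embedded
# in the CSS for the version actually being deployed.
def inject_version(css, version)
  css.gsub("@@VERSION@@", version)
end
```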

So we round-robin through the a, b, and c folders; e.g., if 'a' is currently live then the new build goes to 'b'. We opted for this approach as it saves having to maintain (i.e., add/remove) specific versions in JavaMonitor - which would be just a hassle to do by hand.

These new 'b' instances, for example, are fired up (on each server); once they're up, 'refuse new sessions' is set on the old 'a' instances, allowing them to die off by themselves. If something goes wrong with the new version we can roll back to the previous ones. We don't delete from the server - just overwrite via rsync when it's that instance's turn for an update (according to the very technical whiteboard :). It's a process we're refining as time goes on, but (in theory) it means no downtime.
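The rotation itself is trivial to express; a minimal helper (slot names taken from the folder layout above, function name assumed):

```ruby
SLOTS = %w[a b c].freeze

# Given the slot currently live, return the slot the next build goes to.
def next_slot(current)
  SLOTS[(SLOTS.index(current) + 1) % SLOTS.size]
end
```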

> Actually we have a somewhat different approach:
> * We upload the new version of the app to the server, to the [special
> folder]/[app name]/[revision number] path.
> * Then we make a soft link from there to
> /Library/WebObjects/Applications/[app name]. After that the previously
> deployed version still exists - but not in
> /Library/WebObjects/Applications

Ok
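That symlink switch can be sketched like so (the directory names and the `activate_revision` helper are assumptions for illustration):

```ruby
require "fileutils"

# Point [apps_dir]/[app] at [releases_dir]/[app]/[revision]; the previously
# deployed revision stays on disk, it's just no longer linked.
def activate_revision(releases_dir, apps_dir, app, revision)
  target = File.join(releases_dir, app, revision)
  link   = File.join(apps_dir, app)
  raise "revision not deployed: #{target}" unless File.directory?(target)
  FileUtils.rm_f(link)         # remove the old link if present
  FileUtils.ln_s(target, link) # relink to the new revision
end
```

Rolling back is then just relinking to the previous revision number.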

> * After that we send the restart command to Monitor and the app restarts.

Via mouse or otherwise?

> It results in a small downtime, but if the downtime is not acceptable,
> we restart the app manually (instance by instance) in JavaMonitor.
>
> This is quite a simple approach; it doesn't require a lot of
> integration with JavaMonitor or wotaskd and works quite well for us
> right now. The plan that you're proposing sounds good too (much more
> complicated, though ;) ) - I guess I can write Capistrano recipes for
> the missing parts. By the way, just out of interest, do you use a
> test/build server or do you deploy straight from your development machine?

In recent months, as I mentioned, we've been pulling straight from Bamboo rather than rsync'ing up from my machine. This means only what's committed makes it up, the environment is reproducible (or less experimental, perhaps), and it removes the dependency on me and my laptop being available.


with regards,
--

Lachlan Deck

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Webobjects-dev mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


