Re: capistrano deployments w/ wo
- Subject: Re: capistrano deployments w/ wo
- From: "Michael Bushkov" <email@hidden>
- Date: Wed, 19 Nov 2008 18:51:44 +0300
Hi Lachlan,
On Wed, Nov 19, 2008 at 2:29 PM, Lachlan Deck <email@hidden> wrote:
> Hi Michael,
>
> On 19/11/2008, at 6:06 PM, Michael Bushkov wrote:
>
>> On Tue, Nov 18, 2008 at 11:08 PM, Lachlan Deck <email@hidden>
>> wrote:
>>>
>>> <...>
>>
>> Yes, maybe I simplified things too much. I'll add backup creation and
>> rollback to this example.
>
> Great.
>
>>> So here are some ideas for bonus points .. at least for the advanced guide.
>>> It'd be good to see (even if not in your immediate plans) versioning used
>>> when deploying, including auto-injecting new deployments into JavaMonitor
>>> (or similar) without pulling down the old ones. So there'll need to be
>>> *.cap tasks for
>>> - putting up a new version of an app (which doesn't overwrite an old one)
>>> - starting up the new instance(s)
>>> - setting the old instances to refuse new sessions
>>> - removing apps
>>> - if rsync is used you can upload to the current app if you're only
>>> fixing a resource, for example.
>>> - dealing with split installs (keeping versioning in mind)
>>>
>>> :-)
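The zero-downtime lifecycle behind that task list can be modelled in a few lines of plain Ruby. This is only a toy sketch of the state transitions (deploy a new version alongside the old, start it, refuse new sessions on the old, then remove it); the class and method names are hypothetical, not actual Capistrano or JavaMonitor APIs:

```ruby
# Toy model of a zero-downtime rollout: put up a new version without
# overwriting the live one, start it, set the old instances to refuse
# new sessions, and only remove them once their sessions have drained.
class Rollout
  Instance = Struct.new(:version, :state) # :stopped, :running, :refusing

  def initialize
    @instances = []
  end

  # Put up a new version of the app (which doesn't overwrite an old one).
  def deploy(version)
    raise "version #{version} already deployed" if @instances.any? { |i| i.version == version }
    @instances << Instance.new(version, :stopped)
  end

  # Start up the new instance(s).
  def start(version)
    each_with_version(version) { |i| i.state = :running }
  end

  # Set the old instances to refuse new sessions (existing ones drain).
  def refuse_new_sessions(version)
    each_with_version(version) { |i| i.state = :refusing }
  end

  # Remove an app version once it is no longer serving sessions.
  def remove(version)
    @instances.reject! { |i| i.version == version }
  end

  def states
    @instances.map { |i| [i.version, i.state] }
  end

  private

  def each_with_version(version)
    @instances.each { |i| yield(i) if i.version == version }
  end
end
```

Rolling from v1 to v2 is then deploy("v2") → start("v2") → refuse_new_sessions("v1") → remove("v1"), mirroring the task list above.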
>>
>> Wow, that sounds impressive ) Do you use this model of deployment?
>
> Currently we're pulling builds from bamboo. So after each svn commit bamboo
> runs the build (maven in my case, which produces a tar.gz for both the app
> and webserver resources). We've then got shell scripts that, when manually
> called, pull a specific build from bamboo and rsync/unpack it to a certain
> location (whether for the test or deployment environment).
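A minimal sketch of that pull-and-unpack step, as a Ruby helper that assembles the shell commands. The artifact URL scheme, build key, and target paths below are all made up for illustration; a real Bamboo setup would use its own artifact layout:

```ruby
# Build the shell commands that would fetch a specific CI build and
# unpack its tar.gz to a target area. The URL pattern and paths are
# hypothetical, not Bamboo's actual layout.
def fetch_and_unpack_commands(base_url:, build_key:, build_no:, app:, target:)
  tarball = "#{app}-#{build_no}.tar.gz"
  url = "#{base_url}/browse/#{build_key}-#{build_no}/artifact/#{tarball}"
  [
    "curl -fsSL -o /tmp/#{tarball} #{url}",  # pull the specific build
    "mkdir -p #{target}",                    # make the deploy slot
    "tar -xzf /tmp/#{tarball} -C #{target}"  # unpack app + resources
  ]
end
```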
>
> In JavaMonitor for each app we define multiple instances per server pointing
> to each of 'a', 'b' (and sometimes 'c') folders for an app. So the folder
> structure is like so:
> /<...>/javaMonitorAppName/a/ProjectName.woa/
> /<...>/javaMonitorAppName/b/ProjectName.woa/
> /<...>/javaMonitorAppName/c/ProjectName.woa/
> /<...>/javaMonitorAppName/Properties/log4j.properties
> /<...>/javaMonitorAppName/Properties/jdbc.properties
> /<...>/javaMonitorAppName/Properties/runtime.properties
>
> This way, if we need to fire up new instances of the same version, we can
> do so whilst killing off the old ones (e.g., if an out-of-memory error is hit).
>
> The webserver resources are placed in a similar structure in the relevant
> location (which is defined per instance in JM). We've not yet versioned the
> actual static resources (those simply under apache's control), and our
> versioned split install needs some improvement for css files that hard-code
> references to the version. This can be easily solved during the deployment
> phase by regex-replacing certain tokens.
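That token replacement could be as simple as the following; the "@VERSION@" token name and the css snippet are hypothetical, just to show the shape of the deploy-time substitution:

```ruby
# Stamp the deployed version into static resources at deploy time, so
# css can reference the versioned resources path. "@VERSION@" is a
# made-up token name for illustration.
def stamp_version(css, version)
  css.gsub("@VERSION@", version)
end
```

Run once over each css file as it is unpacked into its slot, e.g. `stamp_version(File.read(path), "b")`.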
>
> So we round robin to a, b, and c folders. e.g., if 'a' is currently live
> then the new build goes to 'b'. We opted for this approach as it saves
> having to maintain (i.e., add/remove) specific versions in javamonitor -
> which would be just a hassle to do by hand.
>
> These new 'b' instances, for example, are fired up (on each server) and once
> they're up, 'refuse new sessions' is set on the old 'a' instances, allowing
> them to die by themselves. If something goes wrong with the new version we
> can roll back to the previous ones. We don't delete from the server - we just
> overwrite via rsync when it's that instance's turn for an update (according
> to the very technical whiteboard :). It's a process we're refining as time
> goes on but (in theory) it means no downtime.
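The round-robin slot choice described above fits in a few lines; a sketch using the a/b/c folder names from the layout earlier in the thread:

```ruby
# Deploy slots, reused in turn rather than accumulating versions.
SLOTS = %w[a b c].freeze

# Given the currently live slot, pick the next folder to deploy into,
# wrapping around from the last slot back to the first.
def next_slot(current)
  idx = SLOTS.index(current) or raise ArgumentError, "unknown slot #{current}"
  SLOTS[(idx + 1) % SLOTS.size]
end
```

So if 'a' is currently live the new build goes to 'b', and after 'c' the rotation wraps back to 'a'.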
Thanks for the information!
>
>> Actually we have a slightly different approach:
>> * We upload the new version of the app to the server, to the [special
>> folder]/[app name]/[revision number] path.
>> * Then we make a soft link from there to
>> /Library/WebObjects/Applications/[app name]. The previously deployed
>> version still exists - just not in
>> /Library/WebObjects/Applications
>
> Ok
>
>> * After that we send a restart command to monitor and the app restarts.
>
> Via mouse or otherwise?
We use telnet to query wotaskd directly. This is all wrapped in a
Capistrano task that can restart any application on any server.
>
>> It results in a small downtime, but if the downtime is not acceptable,
>> we restart the app manually (instance by instance) in JavaMonitor.
>>
>> This is quite a simple approach; it doesn't require a lot of
>> integration with JavaMonitor or wotaskd and works quite well for us
>> right now. The plan that you're proposing sounds good too (much more
>> complicated, though ;) ) - I guess I can write Capistrano recipes for
>> the missing parts. By the way, just out of interest, do you use a
>> test/build server or do you deploy straight from your development machine?
>
> In recent months, as I mentioned, we've been pulling straight from bamboo
> rather than rsync'ing up from my machine. This means only what's committed
> makes it up, the environment is reproducible (or at least less experimental),
> and it removes the dependency on me and my laptop being available.
We have a rule that "only a build that has been deployed (and tested) on
the test server can go to the production server". Deployment to the test
server happens after each commit, so we have a live test environment
there. Once we're sure that a particular application is working correctly,
we copy it to production. It's also quite convenient to use revision
numbers as deployment identifiers - you can always see which revision
is currently installed on the production server.
>
> with regards,
> --
>
> Lachlan Deck
>
>
--
With best regards,
Michael Bushkov
_______________________________________________
Webobjects-dev mailing list (email@hidden)