With LINA, a single executable written and compiled for Linux can be run with native look and feel on Windows, Mac OS X, and UNIX operating systems.
Check out the demo video - it's a bit geeky but you get a feel for how it works.
For software development it's all pretty good - the client gets what they want faster, as usable code released more frequently takes precedence over the traditional Waterfall-style model.
There is a spleen-worthy catch (or two) however -
If you're putting in point solutions, or you already have a well-established framework within which to fit your Agile goodness, then you're all set. If you haven't got the Architecture nailed (and I don't mean diagrams with lines connecting things up, implying it'll all automagically fall into place) then you're going to be winging it.
At an operational level you'll have a bunch of systems and technologies going in with open questions about how it all hangs together - this kind of bottom-up thinking will inevitably lead to a requirement to review what's just gone on and how it could be improved (which your integration partner will gladly charge you for, when it should all have been planned out before anything went into production).
Ideally your Software Architect, Infrastructure Architect and Integration Partner would all sit around a table and plan how it'll all fit together before a single server is purchased. Throughout the process you need to involve the business itself, so that you build and deliver something they'll actually use.
At an architecture level they need to determine things like:
* what technologies will be used, and what the application framework will deliver
* how it will scale, and how it will move from dev to UAT to prod
* how easily other apps can be added into the framework, and what training and resources are required
* whether applications will be delivered externally, and how they will be authenticated
* if you have a CRM, whether that information can be fed back into collaborative workspaces for the client, or whether there will be islands of client metadata
* how these applications will be managed and supported
* whether physical or virtual servers will be used, and what security will be in place
* how backup, recovery and DR will occur, and whether systems will be clustered or load-balanced
Once all the pretty diagrams are in place they need to get to the nitty-gritty of how it will work in operation - what hardware to buy, what software, what network infrastructure, how the dev/UAT/prod environments interact, and so on.
I reckon Agile and Infrastructure are two things that just don't go together - you can't make infrastructure up on the fly if you want anything more than basic services to support point solutions. Infrastructure needs to be planned and documented to support whatever you want to build on top of it - once it's in place, then Project Managers, Analysts and Developers can be as Agile as they like.
Probably the single biggest factor (IMHO) in enabling an agile infrastructure would have to be Virtualisation. With the ability to provision new boxes in about 15 minutes flat, there's no more worrying about when and where hardware is going to come from and who will pay for it. If it looks like you're heading down the Agile route, convince the powers that be to invest and believe in a virtualised infrastructure.
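As a taste of what that provisioning speed looks like in practice, here's a rough sketch using ESX's disk-cloning tool - the datastore paths and VM names are made up for the example:

```shell
# Clone a template's disk into a new VM's directory - the slow part of
# 'building a server' becomes a single file copy on the SAN
vmkfstools -i /vmfs/volumes/san1/templates/w2k3-template.vmdk \
           /vmfs/volumes/san1/newvm/newvm.vmdk
```

Register a new VM against the copied disk and you've got a fresh box in minutes rather than weeks of procurement.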
It all seems pretty obvious that this stuff needs to be thought about, but as an operations person, if you start asking these questions you run the risk of not being 'Agile' and being perceived as the negative aspect of the development plan ('we can't deliver because the Systems team won't give us servers' or 'they won't give our integrator access to extend the Active Directory schema'). Of course Project Managers should ensure the 'big picture' is part of their plan, but you'll often find PMs have tunnel vision - they just want to get their project out the door and into the client's hands - how their application fits into the grand plan is out of scope of their project (it's someone else's problem).
So plan and implement your foundations (see The Great Pyramid of Agile) before the buzzword-compliant methodology comes into play, or you might find yourself playing perpetual catch-up and being forced into recreating the mistakes of the past with quick fixes.
Another thing I picked up was that when allocating space for LUNs, be sure to allocate twice the space you need to allow for Snapshots. This space requirement supersedes the default 20% allocated at the Volume level. For LUN-based Snapshots the agent software on the host itself (eg SnapDrive for Windows or SnapManager for Exchange) manages the Snapshot - it interacts with the SAN to ensure this happens properly, but the SAN itself has no knowledge of what's inside the LUN.
What this means is that if every block in the LUN changes, you need at least as much space again for the Snapshot, or you'll get a disk-space error. It's unlikely this would occur - one situation in which it might is a drive defragment that touches every block.
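To make the sizing rule concrete, here's a minimal sketch - the LUN size is an arbitrary example, and the filer commands in the comments are 7-mode syntax with hypothetical volume and aggregate names:

```shell
#!/bin/sh
# Worst case: every block in the LUN is rewritten once, so the Snapshot
# needs as much space again as the LUN itself - size the volume at ~2x.
LUN_GB=100
VOL_GB=$((LUN_GB * 2))
echo "For a ${LUN_GB}GB LUN, create a ${VOL_GB}GB volume"
# On the filer that might look something like:
#   vol create vol_exch aggr0 200g
#   snap reserve vol_exch 0
#   lun create -s 100g -t windows /vol/vol_exch/exch.lun
```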
It's completely independent of NetApp, but is an excellent place to ask questions or search the list archives for answers.
A good overview of the list is here.
From a Solaris perspective there are a couple of really good guides that fill in the blanks between the Solaris & NetApp documentation:
* OpenSolaris and iSCSI: NetApp Makes it Easy
* iSCSI Examples
Schedule a job to mount your LUNs on the backup server and back up a Snapshot to tape from there. It requires a bit of scripting and tweaking, but it should provide much more flexibility than trying to back up each server individually.
That way you can avoid being reamed by backup software vendors on a per-host basis. You may still opt to do an NTBackup to file for servers and applications, but the databases will reside on the SAN and get backed up to tape.
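A rough sketch of what that scheduled job might look like - the filer name, volume, tape device and the `mount_snapshot_lun` step are all hypothetical placeholders (in real life the LUN mounting would be done by agent software such as SnapDrive):

```shell
#!/bin/sh
# Nightly SAN-to-tape backup sketch. With DRY_RUN=1 (the default) each
# step is printed rather than executed.
FILER=toaster1                 # hypothetical filer name
VOLUME=vol_sql                 # hypothetical volume holding the LUN
SNAP=nightly.$(date +%Y%m%d)
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Take a consistent Snapshot on the filer (rsh access must be enabled)
run rsh "$FILER" snap create "$VOLUME" "$SNAP"
# 2. Mount the Snapshot's LUN on the backup server (placeholder step -
#    SnapDrive or similar would do the real work here)
run mount_snapshot_lun "$FILER" "$VOLUME" "$SNAP" /mnt/backup
# 3. Stream the mounted copy to tape from the one licensed backup host
run tar -cf /dev/st0 -C /mnt/backup .
```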
vif create multi multitrunk1 e0 e1

Then to configure it, do the usual:
ifconfig multitrunk1 [ip address] netmask [netmask address]

And you can bring it up or down in the same way as any other interface. One important point to note is that if you do this from the console, be sure to update /etc/rc and /etc/hosts to reflect the vif, or you'll lose the interface after a reboot. The web interface does write this info to these files, but it's worth double-checking that the updates have been made.
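For reference, the relevant lines in the filer's /etc/rc would look something like this (the IP address and netmask are just example values):

```
vif create multi multitrunk1 e0 e1
ifconfig multitrunk1 10.1.1.5 netmask 255.255.255.0
```

With a matching hostname-to-IP entry in /etc/hosts, the vif survives a reboot.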
esxcfg-vswif -a vswif0 -p Service\ Console -i 10.1.1.1 -n 255.255.255.0 -b 10.1.1.255
And don't forget to set the correct gateway in /etc/sysconfig/network or the command to configure the virtual switch interface will hang.
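The file in question is standard Red Hat fare - something like the following, where the hostname and gateway are example values for illustration:

```
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=esx01.example.com
GATEWAY=10.1.1.254
```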
If this doesn't work, chances are the interface already exists and it won't let you reconfigure it - so delete it first using "esxcfg-vswif -d vswif0" and then re-run the above command.
ESX is very cool, and they've made it pretty compelling in terms of a step up from the free Server (and older GSX) versions. It includes user ACLs, virtual switching, a more efficient hypervisor (the Red Hat 7.2 upon which ESX is based is stripped to the bare bones) and more granularity in terms of resource allocation. One of the things that isn't made very clear is that if you want to leverage some of the bells and whistles (eg High Availability, VMotion, backup, centralised licensing) you'll need a SAN (or NAS in a pinch) and another box - ideally physical, although it could be virtual (obviously you can't do HA or VMotion if the ESX instance hosting the management box dies, though!).
The video is for 'Not Given Lightly' - I'm not usually a fan of sappy love songs, but if you're going to do one then this is definitely the best way to do it - keep it simple, melodic, slightly twee and a little earnest.
Chris even wrote a self-deprecating article about this song 16 years after it was released.
For a less sweet side to his music, check out 'Nothing's Going to Happen' and 'Turning Brown and Torn in Two'.