Data Center in A Box

Updates to the DL980 with Westmere scale it to 80 cores and 4 TB of memory

Citrix Now Supports XenApp on Amazon Web Services' Cloud

Citrix is using AWS CloudFormation (think of it as almost object-oriented infrastructure templates, but not quite) to provide an almost turnkey service.

Today at Citrix Synergy we are pleased to announce support for Citrix XenApp on Amazon Web Services' cloud platform. Companies of all sizes are now free to deliver their Windows applications and desktops from the largest, most scalable public cloud, Amazon Web Services (AWS).

To make it easier for our joint customers to leverage the combination, we've also created some tools that make building Citrix farms on AWS easier. This is more than just a cosmetic upgrade from last year's XenApp 5 preview on AWS to XenApp 6, since the tools encompass additional technologies from Citrix's portfolio and embrace new services from AWS. Together these drive a significant improvement in flexibility, rapid provisioning and security.

To start, our reference design now incorporates AWS's new Virtual Private Cloud (VPC). This permits the use of highly configurable public and private networks, just as you would have in an enterprise datacenter. The majority of the XenApp 6 farm components run in the new Windows Server 2008 R2 AWS instances on the private network, and only XenApp's SSL-encrypting Secure Gateway or the Citrix Access Gateway is exposed on the public network. This further protects your confidential data with multi-tiered security controls.

We've also packaged reference Amazon Machine Images (AMIs) with XenApp evaluation licenses for our customers' convenience, and automated the process of creating new farms with AWS's new CloudFormation service.

Evaluating cloud-based delivery of your Windows applications or desktops is now as simple as clicking a few buttons, waiting a few minutes for CloudFormation to run and logging into your new Citrix farm. More detailed how-to information can be found here and here. A demo can be viewed here.
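
For a feel of what the CloudFormation step involves, here is a minimal Python sketch using the boto library. The stack name, template URL and parameters are hypothetical placeholders, not Citrix's actual published template:

    # Launch a CloudFormation stack and wait for it to finish.
    # Template URL and parameters below are hypothetical placeholders.
    import time
    import boto.cloudformation

    conn = boto.cloudformation.connect_to_region("us-east-1")

    conn.create_stack(
        "xenapp-eval-farm",
        template_url="https://s3.amazonaws.com/example-bucket/xenapp-farm.template",
        parameters=[("KeyName", "my-keypair"), ("InstanceType", "m1.large")],
    )

    # Poll until CloudFormation reports the farm is up (or has failed).
    while True:
        stack = conn.describe_stacks("xenapp-eval-farm")[0]
        print(stack.stack_status)
        if stack.stack_status.endswith("_COMPLETE") or stack.stack_status.endswith("_FAILED"):
            break
        time.sleep(30)

Once the stack reaches CREATE_COMPLETE, the farm components are provisioned and you can log in as described above.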

When you run XenApp on Amazon's cloud, there's no longer a need to incur capital costs or to wait for physical infrastructure to be constructed before you can deliver virtual applications and desktops to your users on nearly any computer, tablet or smartphone over any network.

http://community.citrix.com/display/ocb/2011/05/27/Citrix+Now+Supports+XenApp+on+Amazon+Web+Services%27+Cloud

The Essential CIO: Insights from the Global Chief Information Officer Study

http://public.dhe.ibm.com/common/ssi/ecm/en/cie03073usen/CIE03073USEN.PDF

Since our 2009 Global CIO Study, cloud computing shot up in priority, selected by 45 percent more CIOs than before and leaping into a tie for fourth place with business process management (60 percent each).

Both cloud computing and outsourcing are critical tools for CIOs to reallocate internal resources from routine system maintenance toward tasks that are most valuable to their organizations.

Although it was on the 2009 list, cloud computing took a giant leap. Now it has come of age, rising more than any other CIO priority.

Gigaom’s Top 50 Cloud Innovators

I don't necessarily agree with Gigaom's lists, but here it is...

By Derrick Harris and Stacey Higginbotham



In five short years, cloud computing has gone from being a quaint technology to a major catchphrase. It all started in 2006, when Amazon began offering its Simple Storage Service and soon followed up with its Elastic Compute Cloud.

Just like that, the concept of on-demand, programmable infrastructure that could be accessed over the Internet became a reality. Infrastructure as a service had been talked about, alternately in hushed and gushing tones. Grid computing, utility computing, on-demand computing — they were all ways to describe what Amazon Web Services had delivered.

Fast forward to today, when Amazon and others are moving at Internet speed, trying to offer better security, faster networking, more compliance and a host of other products that are attempting to meet the demands of startups, consumers and enterprises alike. It’s not perfect, as Amazon’s two-day outage earlier this year attests to, but it’s certainly good enough – and getting better.

We launched our Structure conference in 2008 because we saw that the cloud-based infrastructure revolution was going to create new opportunities. As observers, we’ve talked to hundreds of people about cloud computing and its ecosystem. On our Structure channel, we cover the gear and software that comprise the cloud, the services and the people who are changing the industry.

Now, for the first time, we’ve decided to condense that knowledge into the Structure 50, a list of the 50 companies that are influencing how the cloud and infrastructure evolve.

These are the ones to watch — at least in 2011. You’ve heard of some – such as Amazon or Dell. Others – such as Nicira or Boundary – are probably not yet on your radar. But they should be. All of these companies, big or small, have people, technology or strategies that will help shape the way the cloud market is developing and where it will eventually end up.

To the companies who made it on the list, congratulations. For others who missed out, in the future anything is possible. And for those who are still drawing their plans on a piece of paper, we are patiently waiting for you to change the world.

The full list is at: gigaom.com

The Forrester Wave: PaaS For Vendor Strategy Professionals, Q2 2011

Platform-as-a-service (PaaS) offerings represent a critical space within the broader cloud ecosystem, as they provide the linkage between application platforms and underlying cloud infrastructures. In order to build a viable cloud market strategy, vendor strategists of independent software vendors (ISVs) and tech service providers need to understand their partnership opportunities in this area.

PaaS with Outsystems

Rumour has it that Outsystems is talking to Dell and Cisco about providing IaaS in a global partnership to plug into their offering.

Check the Outsystems Agile Platform video below:

HP Cloud Maps for McAfee

Navigate with HP Cloud Maps

HP Cloud Maps provide a technical guide to automate infrastructure and application provisioning and deployment.

The HP Cloud Map for McAfee MOVE is currently the only published template for endpoint security; it details how to automate infrastructure provisioning and deployment for MOVE AV on HP BladeSystem architectures.

In addition, HP Information Security Services provide consulting and managed security services for MOVE deployments.

HP Cloud Maps for McAfee:

http://h71028.www7.hp.com/enterprise/us/en/partners/cloudmaps-mcafee.html

Big Opportunities in Big Data

VCE Product Update

EMC World 2011, VCE Recap: It's All About the Vblocks, Baby...

I apologize that this post is so long in coming. Thanks to everyone who asked if I had more content coming, I’ll try and get everything dumped over the next couple days. I appreciate everyone’s patience!
-----

[Image: the new Vblock cabinets]

Day two of EMC World 2011 was definitely my favorite of the week. The trickle of VCE news from Monday turned into a flood, and the interactions with customers, our parent companies and prospects were incredible. The level of interest is remarkable, and things are definitely steaming along.

On a day that saw 13 different notices put out by the EMC PR team, of course the one that made me happiest was the announcement of the 300-Series Vblock™ platform and the rebranding of the “Type 2” Vblocks™ as the 700-Series. On the face of it, the refresh from the Celerra/CLARiiON to the VNX is pretty logical, keeping the platforms up to date with the latest and greatest from the parent companies, and that’s certainly part of it. But what this also signifies is how VCE is learning from its customers and partners, and working to streamline the process for everyone involved. Ultimately this is a great thing for the customer, but it is also designed to be very good for partners and for the parent company sales teams as well. Want more details? Read on…

Originally, there were four iterations of the Vblock™ platforms, and they looked something like this:

[Image: the original four Vblock models]

While the four models did present a continuum, and each represented the ability to run a set number of standard workloads, the challenge was that the “buckets” were pretty large, and customers didn’t always fit neatly into them. There were also some initial requirements that came along with the Vblock™ that gave people the impression that it wasn’t a “flexible” platform, something that certainly wasn’t true. For customers who had use cases that didn’t fit easily into one of the four models, a process of approving “exceptions” was put in place to allow customers the flexibility they needed. While the customers weren’t impacted by this exception approval process, it created a ton of overhead internally. Essentially, by creating a small number of large buckets, the majority of the customer designs that were coming through required an exception, and that situation needed streamlining.

The VCE Platform Engineering team sat down and tried to answer the question “How do we make it so that 80% of the customer requirements can be handled without any exceptions to the base model configuration, yet maintain the flexibility to accommodate the 20% of customers whose use cases are outside the norm?” With the introduction of the 300-Series and rebranding of the 700-Series, the public is getting the first glimpse of how VCE will handle that.

The first step was to expand the number of “buckets” in order to more closely match the real-world use cases that VCE was seeing from customers. Using the VNX family of storage arrays, the Type 0 and Type 1/1U models have been replaced by the 300-Series, so the whole lineup currently looks like this:

[Image: the updated Vblock lineup]

Within the 300-Series, there are four different models, matching up with four of the EMC VNX arrays, each of which allows for either a block-only or a unified storage configuration. Kenny Coleman put together a good table of the different models here. Graphically, the 300-Series can be represented like this:

[Image: the four 300-Series models]

Along with the expanded number of models, and the simplified storage configurations, the 300-Series Vblock™ platforms will have a number of new features in the coming months, including:

  • The new UIM 2.1 Provisioning/Operations tool, allowing for streamlined provisioning of the infrastructure, creation of hardware-based service catalogs, and deeper insight and correlation between the VMs and vApps and the infrastructure layers themselves.
  • A new generation of drives, storage features and drive types, courtesy of the EMC VNX arrays
  • Smaller base configurations for customers who want to start small and grow, including
    • License-as-you-grow models for UCS chassis and related port licensing
    • Smaller UCS blade packs, allowing customers to grow in smaller increments
    • New RAID packs that simplify adding storage capacity
    • New DAE packs that let customers add disk shelves as they need them, not all at once
    • The 300EX model will support direct-attached storage, eliminating the need for additional switch gear for the smallest of environments
  • Significantly more storage performance, allowing the 300-Series to scale up into environments where a VMAX-based Vblock™ was the only option
  • All of the new and improved array features that the VNX introduces: FAST, FAST Cache, compression, Unisphere, the works...
  • The ability to leverage up to 50% of the blades in the system in a non-virtualized fashion. Our commitment to VMware remains as strong as ever, but the reality is that customers still have workloads that need to be run directly on the blades. While this flexibility was always available by exception, it’s now a base feature of the platform.
  • The new custom-made Panduit racks are stunning (see the picture above), and (in my opinion) being able to remove the EMC-branded storage racks is a huge plus. Coming from a multi-tenant colocation environment, I was never happy with the exposed power switches or the security of the front cover plates. Of course some customers need to have the gear put into their own cabinets, and of course we can accommodate that as well…

The biggest question I’ve gotten so far is about the naming scheme. Obviously there are more changes coming to the full line of Vblocks™, and honestly the naming scheme itself should give you some insight into where we are going. While I can’t share any specifics without a VCE NDA in place, take a look at this, and draw your own conclusions…

[Image: Vblock naming roadmap teaser]

If you ARE under a VCE NDA, please reach out to your sales team and ask for an updated roadmap presentation. Under NDA we can fill in a lot of the blanks that aren’t yet public, both on the infrastructure side as well as on the software and solutions side. There’s so much cool stuff happening, I wish I could share it all, but needless to say it’s going to be very good for both existing and new VCE customers, partners and parent company sales teams.

You may be asking, “Why are you spending so much energy on defining models and configurations?” The reason is that we firmly believe there is incredible value in the product-ization of the infrastructure versus simply providing a reference architecture for customers and partners to build to. I saw someone on Twitter the other day comment that his company had used a reference design to build out an infrastructure, and while the storage array had arrived, he wasn’t expecting the rest of the gear for another month. In my experience, that isn’t how customers want to consume infrastructure. They want to order, they want to know that their order is validated and that it’s going to work when it arrives, and they want it to show up ready to use.

[Image: Vblock ordering process]

The only way to do this, and do it at scale, is to create “products” that companies can buy. Don’t get me wrong, this isn’t a lack of flexibility! In fact, being able to provide multiple models of Vblock platforms allows VCE to standardize more of the delivery process while mapping directly to the customer’s requirements, and still retain the ability to approve non-standard configurations as always.

This process also allows for a much better support experience. There is a significant difference between a “one-call” support model and a “joint” support model. At VCE we have a support organization that is dedicated just to Vblock customers. Any issue, at any level of the infrastructure, can be escalated directly, and VCE will handle ticket creation, tracking, on-site tasking and further escalation, even if any of those require the involvement of one of the parent companies. We’ve also invested millions of dollars into a VCE-specific ticketing queue that is shared between the three companies. While this helps the parent companies communicate directly with each other, it also ensures that VCE support is always aware of every ticket that has been opened involving a Vblock. Every serial number for every component in a Vblock that is sold to a customer is tagged within the parent companies’ ticketing systems, and if any ticket is opened referencing one of those flagged serial numbers, it is automatically routed into the VCE queue. This means that even when customers have a preference or process that leads them to call one of the parent companies directly, VCE will still have visibility into the ticket, will always be able to get involved and help with escalation or dispatching, and will always be able to provide updates on the status of that ticket.

In addition to the support desk, we also have joint problem re-creation labs in multiple locations around the world, cooperative engineering groups, the ability to directly leverage the world-class field and support services organizations of EMC and Cisco, and a full team of VCE employees to manage the process, including regional customer advocates that customers can interface with directly. Add in the strong and constantly growing partner ecosystem and you have an awful lot of resources that can be brought to bear.

[Image: VCE Seamless Support model]

This isn’t “joint” support enabled just because you have a support contract on the hardware, this is VCE Seamless Support that is only available for Vblock customers.

[Image: time-to-market chart]

As stated from the beginning, the charter of the VCE company is to “Accelerate adoption of converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers.” We do this by taking the best technology on the planet from our parent companies and productizing it in a way that takes the pain of integration, validation, support and design off of the customer, providing pre-tested upgrade paths, modular hardware upgrades, accelerated procurement windows and the only bundled, natively developed orchestration and operational management package on the market. Once that is in place, we provide an incredible amount of reference material to show customers how they can deploy solutions on top of the infrastructure, both enterprise and service provider focused, and offer the only multi-tenant framework that addresses the nature of trust between tenants and providers, including compliance, reporting and isolation for both block and NAS configurations. There’s nowhere else that customers can get so much value from a multi-vendor infrastructure, and we aren’t done learning from our customers or raising the bar on their behalf. There’s so much more to come; I can’t wait to share it.

source: vmforsp.typepad.com

IDC Five Steps to Successful Integrated Cloud Management

European Commission opens cloud consultation

19th May 2011

The European Commission (EC) has officially launched its public consultation into cloud computing – a survey designed to collect the opinions of individuals, businesses and public bodies across the continent, prior to the release of a European Cloud Computing Strategy next year.

The online questionnaire will form the basis of a number of elements related to European policy on cloud computing, and aims to gain clarity on the issues and barriers affecting the adoption of cloud infrastructure and services. 
As the consultation document specifies:

“The EU needs to become not only Cloud-friendly but Cloud-active to fully realise the benefits of Cloud Computing. Besides allowing for the provision of Cloud Computing in its various forms, the relevant environment in the EU has to address the needs of end users and protect the rights of citizens. At the same time, it should allow for the development of a strong industry in this sector in Europe.”

The survey forms one part of the Digital Agenda for Europe, headed by European Commissioner Neelie Kroes, who has been a long-standing champion of building a cloud computing industry in Europe, as opposed to relying on a market currently dominated by US cloud providers. Following the launch of the public consultation, Kroes stated:

"I am excited about the potential benefits of cloud computing to cut costs, improve services and open up new business opportunities. We need a well-defined cloud computing strategy to ensure that we make the best use of this potential. The input we are requesting from all interested parties is important to get it right."

“Cloud Computing represents a paradigm shift away from today's decentralised IT systems,” Kroes added. “It is already transforming providers of IT services and it will change the way other industrial sectors provision their IT needs as users, as well as the way citizens interact with their computers and their mobile devices.”

Last month Business Cloud News reported Kroes' statement that she believed cloud computing could be 'vital to Europe's growth', and the public consultation process is a clear indication that the EC wishes to get as much information about the new technology from across the continent before it outlines its strategy.
However, with so many private companies across the globe already introducing new cloud infrastructure into their IT services on a daily basis, questions may yet be raised over how long the consultation and strategy-drafting process will take - especially with superpowers such as the US steaming ahead with both private and public sector adoption.

The online consultation is open until 31 August and can be accessed here

Analyzing the Amazon Outage with Kosten Metreweli of Zeus

By Dan Kusnetzky

Summary

If organizations had done the right things, Amazon’s outage would have been a momentary irritation, not a disaster. Why didn’t Amazon customers have a plan “B”?

During the marketing feeding frenzy after Amazon’s small outage a while ago, I had the opportunity to speak with Kosten Metreweli, Chief Strategy Officer for Zeus, about what happened, how folks were hurt and what can be done to prevent such occurrences from causing pain in the future.

Even though Amazon offered ways for customers to set up shop in several different data centers, or Availability Zones as Amazon calls them, many didn’t have plans that included use of alternative data centers; backup and recovery of critical data; and methods to detect a failure and redirect traffic to other resources. Since the technology to manage such outages has been available for ages, why didn’t we see evidence of planning for an Amazon outage?

Here is a summary of some of the steps that could have prevented a great deal of the pain (and opportunity for suppliers to market their products and services):

  • Organizations could have taken a page from the planning and operational processes they already use for mainframe, midrange and x86-based system workloads, and hosted critical applications in several places with a workload manager routing traffic to the systems with the most available capacity.
  • They could have done the same thing with cloud-based storage.
  • They could have routinely tested failover processes by “unplugging something” to see if their processes really worked (a minimal sketch of such a health check follows this list).
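
As promised above, here is a minimal Python sketch of the detect-and-redirect idea, assuming hypothetical health-check URLs in two zones; a real deployment would drive a DNS update or load-balancer change rather than a print statement:

    # Probe an application endpoint in each zone and pick the first healthy one.
    # The URLs are hypothetical placeholders.
    from urllib.request import urlopen
    from urllib.error import URLError

    ENDPOINTS = [
        "http://app.zone-a.example.com/health",  # primary zone
        "http://app.zone-b.example.com/health",  # standby zone
    ]

    def first_healthy(endpoints, timeout=3):
        """Return the first endpoint that answers its health check, else None."""
        for url in endpoints:
            try:
                with urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except (URLError, OSError):
                continue  # unreachable or failing: try the next zone
        return None

    active = first_healthy(ENDPOINTS)
    if active:
        print("Routing traffic to", active)  # e.g., update DNS or the load balancer
    else:
        print("No healthy endpoint; invoke the disaster recovery plan")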

Since we heard so many stories about companies losing access to critical applications and data during Amazon’s outage, it is clear that IT planners must not have been involved in the adoption of Infrastructure-as-a-Service products.

  • This may have been because business, not IT, decision makers made the choice to use Amazon and didn’t ask for help. This may have been due to these decision makers purposely going around IT to get things done that were always “2 Years” away in IT’s development plans.
  • It may also be due to them not knowing how to read Amazon’s terms and conditions. They might have believed that Amazon was going to do more to back up their data and have disaster recovery plans and procedures in place, even though the Ts and Cs state those things are a customer’s responsibility (unless they purchase specific Amazon services).
  • They didn’t know anything about redundancy, workload management, back up servers, multi-tier storage and the like and so didn’t think about it.

All in all, this incident showed that cloud computing environments, like on-premises IT infrastructure, need to be carefully architected, implemented and operated. Tools such as those offered by Zeus and others should have been baked in from the beginning.

source: http://www.zdnet.com/blog/virtualization/analyzing-the-amazon-outage-with-kosten-metreweli-of-zeus/3069

Save Your Files On The Cloud Automatically

Save your files on the cloud and automatically back up and sync them among all your computers, mobile devices and the cloud.

Get 2GB free or purchase more storage.
http://db.tt/ScGG03Y

IT Pro Perceptions on Why Oracle Dropped HP Integrity

Interesting research on “450 Enterprise IT Pros” and their perception as to why Oracle dropped HP Integrity:

http://gabrielconsultinggroup.com/

Customers in our recent survey of IT professionals have told us why they think Oracle discontinued porting to Itanium. But what do they think the company will do next? Here are their top responses.

The alternative choice - there is no next step, because there's no overall strategy - came in dead last. Of particular note is 'next step' #3: only 8% of the 450 customers we surveyed don't think that being forced onto an all-Oracle solution is in the cards.

You can read the full report, "What's Oracle's Next Move?", or download it from the ‘Recent Research’ section of our website here. Find out how we conducted the survey and who our respondents are here.

Amazon Cloud Used to Hack Sony

Interesting articles which emphasize online security concerns:

http://www.bloomberg.com/news/2011-05-13/sony-network-said-to-have-been-invaded-by-hackers-using-amazon-com-server.html

"Sony Network Said to Have Been Invaded by Hackers Using Amazon.com Server" (May 14, 2011)

All it takes is one incident to cripple a company's online reputation:

"Sony offered customers a free year of identify-theft protection after its PlayStation Network and Qriocity entertainment networks were crippled by the attack. Thieves may have stolen credit-card, debit records and other personal information from customers of Sony Online Entertainment, a third service. The New York Attorney General's office has subpoenaed Sony, according to a person familiar with the probe."

Also -

http://www.theregister.co.uk/2011/05/14/playstation_network_attack_from_amazon/

"Bloomberg doesn't say how Amazon's cloud service was used to mount the attack. If the report is correct, it wouldn't be the first time it's been used by hackers.

"German security researcher Thomas Roth earlier this year showed how tapping the EC2 service allowed him to crack Wi-Fi passwords in a fraction of the time and for a fraction of the cost of using his own computing gear. For about $1.68, he used special 'Cluster GPU Instances' of the Amazon cloud to carry out brute-force cracks that allowed him to access a WPA-PSK protected network in about 20 minutes."

Microsoft BPOS Cloud Major Outage

Looks like the recent Amazon outage is not the only example of an online service experiencing growing pains these days.

Reference:

May 13, 2011 - "Microsoft BPOS cloud outage burns Exchange converts - 'All in' now looking for a 'way out'"

http://www.theregister.co.uk/2011/05/13/microsoft_bpos_apology/

May 12, 2011 - Microsoft's BPOS cloud customers hit by multi-day email outage

http://www.zdnet.com/blog/microsoft/microsofts-bpos-cloud-customers-hit-by-multi-day-email-outage/9436

http://blogs.channelinsider.com/content001/cloud_computing/this_weeks_cloud_outage_microsoft_bpos.html

"May 13, 2011 - Microsoft's BPOS outage comes at an unfortunate time for the software giant. The company is getting set to launch the next version of BPOS, Office 365, next month. And this outage comes on the heels of a major Amazon.com cloud outage that brought down several very popular websites."

Migration to the cloud starts with development & test

RightScale released a white paper titled "Migration to the Cloud Starts with Development & Test" to highlight this trend in many accounts.

JUST FOR THOUGHT: A large number of customers are not interested in a public cloud offering, but rather would like to provide dev. & test services behind the firewall.

Cisco enters the POD marketplace

Cisco announced last week that it is entering the portable containerized data center market with a fully configured data center solution in a purpose-built 40-foot shell. It includes 16 data center racks, each supporting 25 kW of power.

The Cisco containers have a Power Usage Effectiveness (PUE) of 1.05 to 1.30, where typical datacenters are in the 1.6 to 2.0 range.

Just a reminder: the PUE corresponds to the ratio of total data center power consumption to the consumption of the actual IT equipment. HP's Wynyard (UK) datacenter has a PUE of 1.19.
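
To make the metric concrete, here is a small Python sketch of the arithmetic using the figures quoted above (the 16 x 25 kW rack load comes from the Cisco announcement; treat the numbers as illustrative):

    # PUE = total facility power / IT equipment power,
    # so non-IT overhead (cooling, power distribution) = (PUE - 1) * IT load.

    IT_LOAD_KW = 16 * 25.0  # 16 racks at 25 kW each, per the announcement

    def overhead_kw(pue_value, it_load_kw=IT_LOAD_KW):
        """Power spent on everything that isn't IT equipment at a given PUE."""
        return (pue_value - 1.0) * it_load_kw

    for label, pue_value in [("Cisco container, best case", 1.05),
                             ("HP Wynyard", 1.19),
                             ("typical datacenter", 1.8)]:
        print("%-26s PUE %.2f -> %.0f kW of overhead"
              % (label, pue_value, overhead_kw(pue_value)))

At a typical 1.8 PUE, that 400 kW of IT load drags along 320 kW of overhead; at 1.05, only 20 kW.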

See feedback from InformationWeek, Ubergizmo, PCWorld, eWeek and Network World.

Also HP has been in this market for quite a while and even has POD-Works, a POD manufacturing facility.

To compare and learn more about HP's offering, check here.

Red Hat cloud platform challenges VMware, Microsoft

The beta release of OpenShift uses Amazon as a platform for building interoperable cloud workloads

By Charles Babcock, InformationWeek

Red Hat's first shot at cloud computing was too painful for customers, requiring either deep open source expertise or a consulting contract with Red Hat. Now Red Hat is coming out with something more promising: a platform-as-a-service called OpenShift that uses tools familiar to many open source developers.

It's an approach that has a chance--with some important caveats. Among other things, it's taking on platforms from Microsoft and VMware.

In addition, Red Hat said at its annual user group Summit in Boston Wednesday that it is not trying to target one type of cloud but to produce standardized workloads that can be exported either to a company's private cloud or to a variety of public clouds, including Amazon Web Services' EC2. In another departure, it's trying to shift the focus of cloud building away from the creation and management of virtual machine workloads to the creation and management of cloud applications that endure for a long lifecycle. That lifecycle would include frequent alterations to match changing business needs.

With these moves, Red Hat is trying to differentiate itself from VMware and Microsoft. It's trying to offer a more open development platform compared with those two vendors, which are the leading contenders for companies building private and hybrid cloud architectures. VMware and Microsoft have each provided extensive platforms to support their virtual environments. Each results in some degree of lock in.

Red Hat is one of the few companies that could bring this combination of elements to do-it-yourself cloud building. It has a backbone constituency of Enterprise Linux users throughout the corporate world and is the only open source company whose revenue is expected to cross the $1 billion mark for the first time this fiscal year. Perhaps the second most successful open source company is MySQL AB, which was sold in 2008 to Sun Microsystems for $1 billion; its annual revenue was less than $200 million, by most accounts.

Red Hat is moving far beyond its previous "support" for cloud builders, represented by its announcement of Cloud Foundations at last year's Red Hat Summit in Boston. Cloud Foundations was a stack built on Enterprise Linux and JBoss middleware but, like other cloud products, was focused on building virtual machines suitable for Amazon or other targeted VM environments.

OpenShift provides an application development environment initially hosted on Amazon's EC2 cloud. Developers may work there in a variety of open source languages, including Ruby, Python, and PHP, and then target the application for deployment in the cloud. For now, the default deployment environment is also EC2 or an organization's internal cloud data center.

But Red Hat CTO Brian Stevens says the OpenShift environment supplies standardized APIs for cloud services, and "with a one line command," the application can be deployed to a different cloud "without figuring out all the nuances" of the target environment. This approach hinges on developers adopting Red Hat's recommended Deltacloud APIs, and on at least some public cloud suppliers supporting them. More typically, a developer would need to master the details of a target's API set.

Red Hat has been an advocate of its Deltacloud interoperability API standard and has submitted it to the DMTF standards body. But Red Hat hasn't published even a short list of cloud environments that support Deltacloud. In addition, there are competitors with open APIs, such as Simple Cloud API sponsored by Zend Technologies, IBM, and Microsoft; or open source versions of EC2's APIs, supplied by Eucalyptus. The OpenStack project is creating another set of open source cloud APIs and has submitted them to DMTF as well.

Red Hat is making a bet on Deltacloud APIs, saying that by building services calls to its neutral Deltacloud APIs, cloud developers at least have the option of finding conversion services to a cloud of choice or using those clouds that have chosen to recognize it.
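
To illustrate the idea (this is not Red Hat's exact tooling), here is a rough Python sketch against a Deltacloud server running locally; the endpoint paths, parameter names and response shapes follow Deltacloud's REST conventions of the era, but treat them as illustrative:

    # The same HTTP calls go to a local Deltacloud server, which translates
    # them for whichever back-end cloud it was started against (EC2, a
    # private cloud, etc.). Paths, parameters and JSON shapes are illustrative.
    import requests

    DELTACLOUD = "http://localhost:3001/api"   # e.g., started as: deltacloudd -i ec2
    AUTH = ("access_key", "secret_key")        # placeholder credentials
    HEADERS = {"Accept": "application/json"}

    # List the images available on whatever cloud the server is fronting.
    data = requests.get(DELTACLOUD + "/images", auth=AUTH, headers=HEADERS).json()
    image_id = data["images"][0]["id"]         # response shape is illustrative

    # Launch an instance from that image; the identical call would work
    # against a different provider without code changes.
    resp = requests.post(DELTACLOUD + "/instances",
                         data={"image_id": image_id},
                         auth=AUTH, headers=HEADERS)
    print(resp.status_code, resp.text[:200])

The point is that the application code talks only to the neutral API; swapping clouds means restarting the Deltacloud server against a different driver, not rewriting the calls.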

OpenShift expands on the platform-as-a-service that Red Hat acquired last November with its purchase of Makara, a startup offering an application building platform on EC2. Stevens says that Red Hat has adopted the Git deployment engine that comes out of the Linux kernel development process. Linus Torvalds wrote Git, a directory of a developer's files with a repository for tracking each version created. Red Hat offers a hosted version for joint cloud projects with multiple developers. Stevens says it can push code out to a targeted recipient as well as track source code changes, which gives developers an automated deployment tool once an application is built.

Unlike predecessors VMware, Citrix Systems, or Eucalyptus Systems, Red Hat is separating the formation of private and hybrid cloud workloads from their dependence on virtual machines. Heretofore, cloud vendors have assumed the private cloud will be a heavily virtualized environment, but there is no reason it has to be; virtualization vendors just tend to assume it will. Red Hat is shifting the focus more toward application management, even if the application runs on bare metal--a server with nothing but the application and its operating system.

In the end, Red Hat has got it right. Cloud computing is more about applications than virtualization. As a practical matter, most implementers are pursuing the private cloud as a step toward greater efficiency and the most efficient internal servers have been heavily utilized virtual ones. In the future, the private cloud setting is likely to be a mix of virtualized and unvirtualized servers.

Before, building a private cloud with Enterprise Linux, JBoss middleware, and the KVM hypervisor required deep open source expertise or consulting services from Red Hat, Stevens admits. Now the JBoss middleware exists as an easily invoked service in the background of OpenShift application building.

The platform supports use of different frameworks, including VMware's Spring Framework, Ruby on Rails, Zend Framework for PHP, JBoss Seam, Rack (an interface for Ruby web servers), the Symfony PHP framework, and Java Enterprise Edition 6.

With Microsoft approaching cloud application development from the strength of its Visual Studio tools in the Azure cloud, and VMware approaching it from its virtual machine management, it's time for a third major option appealing to the open source corner of the universe. Stevens contends that that's Red Hat's OpenShift, available in beta with delivery of a 1.0 version to come later this year.

As a result, building workloads to run in the private cloud with the potential to move into the public cloud may have gotten a little easier. There are still miles to go before OpenShift can become the kind of generally interoperable cloud platform that Red Hat talks about. The most troubling issue is who, besides Red Hat, supports Deltacloud APIs, given the proliferation of competing APIs. But a comprehensive open source effort has got to start somewhere. Red Hat has made its best effort at launching that platform.

Detailed report on Telecom Software Market by Analysys Mason

Analysys Mason Dec 2010- Telecoms Software Market Shares and Forecasts

Vendors Seize Emerging Mobile Cloud Service Opportunities

Enterprises of all sizes must support a multitude of mobile devices and applications that run over a wide range of mobile operating systems, as well as update devices and applications every few months. In addition, empowered employees are circumventing . . .

The SaaS Market Hits Mainstream: Adoption Highlights 2011

Forrester recently surveyed more than 1,000 firms across North America and Europe to better understand software-as-a-service (SaaS) adoption and the business drivers for SaaS usage. We found that adoption continues to grow across both horizontal categories . . .

IT Infrastructure And Operations: The Next Five Years


Interest in cloud technology and cloud economics abounds. While the technology delivers immediate reductions in capital costs, Forrester believes that cloud computing's greatest benefits will come from changes to the IT technology and organizational model.

Cisco Enters Portable Containerized Data Center Market

By Joseph F. Kovar, CRN

Cisco Monday entered the containerized data center market with a fully configured data center solution in a purpose-built 40-foot shell based on its networking and Unified Computing Systems technology.

Cisco's containerized data center includes 16 data center racks, each supporting 25 kW of power, in all-new ISO-standard steel shipping containers, said Keith Siracuse, Cisco product marketing engineer.

The new containerized data centers target customers who need quick deployment, Siracuse said. "Brick-and-mortar data centers take two years to build," he said. "The Cisco containerized data center takes 120 days from the day the order is cut to get it to the customer site."

Portable containerized data centers are nothing new; Hewlett-Packard, Dell, SGI and Oracle via its Sun acquisition offer them as well. And they are more efficient than traditional data centers, Siracuse said. For instance, the Cisco containers have a Power Usage Effectiveness (PUE) of 1.05 to 1.30, which he said is the ratio of total power used by the data center compared to the power used to run the server, storage, switch and IT equipment. Brick-and-mortar data centers have a PUE rating of 1.6 to 2.0, he said.

Investment in containerized data center solutions is also easier in that they typically depreciate over seven years compared to the typical 20 years of a brick-and-mortar data center, said Brian Koblenz, CTO for Cisco's new modular data centers.

Cisco's containerized data centers feature Cisco networking combined with technology from multiple partners, Siracuse said. "This is not just routers and switches," he said. "This is like building a normal data center. We provide the switches and routers, and work with partners to add the infrastructure such as the UPS, chillers and so on, just like anyone does with brick-and-mortar centers."

Cisco's containerized data centers are available for order through Cisco. Cisco is working with two systems integrators with extensive data center integration experience, Siracuse said. "Cisco is going to market with master integrators," he said. "Our channel partners can work with the master integrators. Solution providers can integrate customers' equipment into the containers, while the integrators handle the infrastructure."

The units can be configured with Cisco's UCS data center technology, which ties server, storage and networking into a single architecture. Customers can also order them configured with a vBlock storage architecture from VCE, the EMC-Cisco-VMware venture that builds storage infrastructures for virtualized and cloud environments, or with a NetApp FlexPod Modular Data Center Solution, he said.

Siracuse said Cisco built its containerized data center from the ground up with operational and cost efficiencies in mind. Each rack is individually cooled by bringing cool air from the top and circulating it downward instead of using a traditional hot aisle/cold aisle arrangement. Furthermore, chilled water is circulated under the floor of the container instead of from above to decrease the risk of leaks, he said.

All maintenance of the racks in the containers is done inside, with no doors on the sides of the containers included, he said. A track running down the center aisle makes it easy to use an optional dolly for moving equipment, he said.

Cisco is also including the option to either bring data and power cables through the front door of the container or up through the floor. "The government is interested in this feature because of the extra security," Siracuse said.

http://www.crn.com/news/data-center/229402578/cisco-enters-portable-containerized-data-center-market.htm