The recent launch of Virtual Internet in the United States offers a moment to pause and consider activity in the U.S. e-commerce sector.
This is one arena perfectly suited to cloud computing, which allows enterprises to scale up or down quickly during seasonal spikes in activity.
According to U.S. Commerce Secretary Gary Locke, the world does an estimated $10 billion of business online, including:
• Consumers pay their utility bills from their smartphones;
• People download movies, music and books online; and
• Companies, from the smallest local store to the largest multinational corporation, order goods, pay vendors and sell to customers via the Internet.
Internet Retailer reports that U.S. e-commerce sales rose nearly 15% between 2009 and 2010, reaching a total of $165 billion.
The estimates are based on quarterly surveys of more than 11,000 U.S. merchants conducted by the U.S. Commerce Department.
Meanwhile, comScore reports that about 50% of all computers and 30% of all consumer electronics bought in the U.S. are now purchased online.
Further, 31% of U.S. mobile phone subscribers, or 72 million consumers, now shop online using their smartphones. Between March 2010 and March 2011, the number of people visiting an online store from a mobile device increased by 90%, a staggering rise.
Being constantly connected means consumers are also decreasing the time they spend in stores, even for perishable items. According to comScore, 12% of Internet users say they have bought grocery items online.
The E-Commerce Times reports that the average American credit cardholder carries 3.5 credit cards, used to buy both large-ticket and small-ticket items. In short, this growing reliance on credit cards demonstrates the importance of protecting card numbers.
This makes the PCI DSS standards, developed in collaboration by MasterCard, Visa and American Express, critical for all merchants involved in rising e-commerce activity.
“All merchants and service providers who store, process and transmit credit card information must undergo quarterly self-assessments as well as audits (vulnerability scans) by an Approved Scanning Vendor (ASV) in accordance with PCI DSS Scanning Procedures,” said The E-Commerce Times.
One interesting point raised in the article centered on broad system protection, not just for databases but for other tools such as SharePoint sites that house spreadsheets and documents containing sensitive data.
Bloomberg news reports that in 2010, "companies lost about $37 billion to online fraud or theft, and 8.1 million U.S. adults had their identities stolen."
Further, the same report suggests that the U.S. government plans to spend $56.3 million on technology aimed at safeguarding the online marketplace and those who operate in it, including consumers, businesses and government agencies. We recently published an interview with UK-based PalmTree, which offers LiveEnsure, a cloud-based SaaS mash-up authentication service that goes beyond traditional token-based services. Expect this technology to become more prevalent over the next few years.
Research agencies predict that 80% of all e-commerce activity will be done in the cloud by 2020, which raises concerns about protecting data in virtualized environments spread across multiple locations.
Security was also the focus of recent guidelines released by the PCI Security Standards Council, which sought to clarify security issues in the virtualized environments that form the basis of cloud computing. Read the latest PCI DSS update here!
With powerful, secure data centers delivering private and public clouds to enterprise customers, Virtual Internet puts a premium on meeting ISO 9001 and ISO 27001 standards, all designed to protect and safeguard your data.
Our United States cloud centers are officially open for business!
Back in March we published a post about how PCI compliance standards are reshaping security in the cloud. The big question was how the Standards Council would define guidelines around virtualization via its special interest group, which was pondering the problem.
On June 14, the council finally published guidance that will influence security in the cloud-computing arena, specifically practices around the management of payment card data.
"While virtualization may provide a number functional and operational benefits, moving to a virtual environment doesn’t alleviate the risks which existed on the physical systems, and may also introduce new and unique risks," said one of their guidance documents.
"Consequently, there are a number of factors to be considered when implementing virtual technologies, including but not limited to those defined below."
More than 30 participating organizations helped formulate the guidance documents, which help merchants, service providers, processors and vendors understand how PCI DSS applies to virtual environments, including:
• Explanation of the classes of virtualization often seen in payment environments including virtualized operating systems, hardware/platforms and networks
• Definition of the system components that constitute these types of virtual systems and high-level PCI DSS scoping guidance for each
• Practical methods and concepts for deployment of virtualization in payment card environments
• Suggested controls and best practices for meeting PCI DSS requirements in virtual environments
• Specific recommendations for mixed-mode and cloud computing environments
• Guidance for understanding and assessing risk in virtual environments
The body concluded that there is no single method for securing virtualized systems. Some of the general recommendations included:
• The flow and storage of cardholder data should be accurately documented as part of this risk assessment process to ensure that all risk areas are identified and appropriately mitigated.
• Designing all virtualization components, even those considered out-of-scope, to meet PCI DSS security requirements will not only provide a secure baseline for the virtual environment as a whole, it will also reduce the complexity and risk associated with managing multiple security profiles, and lower the overhead and effort required to maintain and validate compliance of the in-scope components.
• When assessing physical controls, consider the potential harm of an unauthorized or malicious individual gaining simultaneous access to all VMs, networks, security devices, applications, and hypervisors that one physical host could provide. Ensure that all unused physical interfaces are disabled, and that physical or console-level access is restricted and monitored.
• The body said all players should consider how security could be applied to protect each technical layer, including but not limited to the physical device, hypervisor, host platform, guest operating systems, VMs, perimeter network, intra-host network, application, and data layers. Physical controls, documented policies and procedures, and training of personnel should also be a part of a defense-in-depth approach to securing virtual environments.
• Preventive controls such as a network firewall should never be combined on a single logical host with the payment card data it is configured to protect. Similarly, processes controlling network segmentation and the log-aggregation function that would detect tampering of network segmentation controls should not be mixed.
• Accounts and credentials for administrative access to the hypervisor should be carefully controlled, and depending on the level of risk, the use of more restrictive hypervisor access controls is often justified.
The report went on to recommend a number of additional directives for securing a virtualized environment. One particularly telling passage related to the concepts of IaaS, PaaS and SaaS.
Cloud computing also encompasses several types of services, including IaaS, PaaS, and SaaS. Each type of service represents a different assignment of resource management and ownership, which will vary depending on the specific service offering.
For example, an entity subscribing to an IaaS service may retain complete control of, and therefore be responsible for, the ongoing security and maintenance of all operating systems, applications, virtual configurations (including the hypervisor and virtual security appliances), and data. In this scenario, the cloud provider would only be responsible for maintaining the underlying physical network and computing hardware. In an alternative scenario, a SaaS service offering may encompass management of all hardware and software, including virtual components and hypervisor configurations.
In this scenario, the entity may only be responsible for protecting their data, and all other security requirements would be implemented and managed by the service provider.
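The division of responsibility described above can be sketched as a simple matrix. This is an illustrative sketch only; the layer names and assignments below are assumptions for demonstration, not an official PCI DSS responsibility mapping:

```python
# Hypothetical shared-responsibility matrix for the scenarios described above.
# Layer names and ownership assignments are illustrative, not official.
RESPONSIBILITY = {
    # layer:             (IaaS,       PaaS,       SaaS)
    "physical hardware": ("provider", "provider", "provider"),
    "hypervisor":        ("provider", "provider", "provider"),
    "operating system":  ("customer", "provider", "provider"),
    "application":       ("customer", "customer", "provider"),
    "data":              ("customer", "customer", "customer"),
}

def customer_scope(model):
    """Return the layers a subscriber must secure under a given service model."""
    idx = {"IaaS": 0, "PaaS": 1, "SaaS": 2}[model]
    return [layer for layer, owners in RESPONSIBILITY.items()
            if owners[idx] == "customer"]

print(customer_scope("IaaS"))  # OS, application and data fall to the customer
print(customer_scope("SaaS"))  # only the data remains with the subscriber
```

The point of the matrix is simply that the boundary of the entity's security obligations shifts with the service model, exactly as the guidance describes.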
In its final remarks, an information supplement to the guidelines stated that the lack of virtualization industry standards has resulted in a number of vendor-specific best practices and recommendations that may or may not be applicable to a particular environment.
The report thus remains a 'guidance' document and still leaves a number of implementation practices up to the vendor and/or merchant.
Virtual Internet (VI), a major UK provider of business hosting solutions, is officially launching its flagship cloud-hosting product. With recently upgraded VMware™ cloud servers increasingly outselling conventional hosting solutions, VI’s business-class cloud solutions deliver reliability, cost savings, scalability, and performance.
According to Forrester Research, nearly 70% of an average IT budget is spent on maintenance of existing infrastructure, reducing the resources allocated to innovation and strategic planning. Extremely flexible and fail-safe, cloud hosting creates a fully automated workflow with hosting resources dynamically provisioned when and where they are needed. As a self-healing virtual infrastructure, cloud hosting delivers continuous network uptime, whilst allowing freedom of management of operational systems, load balancing, and overall resource utilization.
“The bottom line is that cloud hosting is the most efficient hosting solution today and we're happy to deliver our solutions to the US market. This is the future of web hosting,” said Patrick McCarthy, VI’s Managing Director.
VI has created a hardware-based cloud that is fully redundant in both hardware and networking, including redundant hardware firewalls. The cloud is sandbox- and production-ready the instant customers sign up for a customized SLA. They can hit the ground running, knowing that their applications are powered by the finest hypervisors in the cloud.
“Having an advanced and flexible hosting solution is an indispensable part of a solid business plan. The hosting provider you choose will be the backbone of your business, and VI has been that vital framework for numerous customers since 1996,” remarked McCarthy. “We invite US businesses to experience a new level in cloud web hosting, with industry-leading SLAs and 24/7 customer support.”
Customers’ data is hosted at a 10,000 square foot datacenter in Lindon, Utah. The state-of-the-art facilities are SAS 70 compliant with Tier 3 classification.
For information or a quote, visit www.vi.net or call 877 358 3819.
ABOUT VIRTUAL INTERNET
Since 1996 Virtual Internet has been keeping an innovative finger on the pulse of web hosting technology. It was the first host to provide both Xen and VMware private and public cloud servers to the UK enterprise market. Now the company has launched "on-demand" VMware private and public cloud hosting services to the US market at a fraction of the traditional cost.
Intel is incorporating hardware-assisted virtualization technology that goes beyond the server consolidation phase typified by the groundbreaking software released by VMware in 1999 and by later entrants such as OnApp Xen and Linux KVM.
They call this phase Virtualization 2.0, representing fundamental advances in how cloud data centers handle load balancing, high availability and disaster recovery.
Some of Intel’s specific innovations include Virtual Machine Device Queues (VMDQs), which offer up to twice the network throughput of usage models relying solely on software for virtualization.
According to Intel, this translates into 9 Gbps of throughput versus 4 Gbps in software-only solutions. These VMDQs reduce virtual machine overhead on a given server host and boost performance.
Intel then works closely with vendors such as VMware and Red Hat to make sure their latest software can leverage these hardware innovations.
Managed cloud hosting providers such as Virtual Internet use both Intel’s hardware and vendor software from VMware to deploy highly scalable Infrastructure-as-a-Service (IaaS) to enterprise customers in the UK and US.
Intel closely studies the data center strategies of web hosts such as Virtual Internet to build usage models that tap into the power of Virtualization 2.0.
For Intel, research is focused on three particular usage models, which go well beyond the initial server consolidation phase that cloud computing has used so effectively.
Dynamic load balancing
A virtualization manager migrates virtual machines from a busy host to an idle host in the network. Load may spike as, for example, enterprise staff or customers log into applications in the morning, then ease off by lunchtime. These spikes require careful management of virtual machine resources across the various nodes in the network.
High availability
If a particular host server running multiple virtual machines goes down, the technology needs to restart the VMs automatically on another node in the network with minimal downtime. This is often referred to as the ‘self-healing’ power of the cloud, typified in a web hosting data center.
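A rough sketch of the load-balancing and self-healing behaviours described above, under stated assumptions: the host records, load thresholds and the migration step are hypothetical stand-ins for what a real virtualization manager does via live migration.

```python
# Simplified sketch of dynamic load balancing and self-healing failover.
# Host records, thresholds and the "migration" list moves are hypothetical.

def rebalance(hosts, busy=0.80, idle=0.30):
    """Move one VM from the busiest host to the idlest host when load is skewed."""
    busiest = max(hosts, key=lambda h: h["load"])
    idlest = min(hosts, key=lambda h: h["load"])
    if busiest["load"] > busy and idlest["load"] < idle and busiest["vms"]:
        vm = busiest["vms"].pop()            # stand-in for a live migration
        idlest["vms"].append(vm)
        return vm
    return None

def self_heal(hosts):
    """Restart VMs from any failed host on the first healthy node."""
    healthy = next(h for h in hosts if h["alive"])
    for h in hosts:
        if not h["alive"]:
            healthy["vms"].extend(h["vms"])  # restart elsewhere, minimal downtime
            h["vms"] = []

hosts = [
    {"name": "A", "load": 0.90, "vms": ["vm1", "vm2", "vm3"], "alive": True},
    {"name": "B", "load": 0.10, "vms": ["vm4"], "alive": True},
]
rebalance(hosts)            # morning spike: one VM moves from A to B
hosts[0]["alive"] = False
self_heal(hosts)            # A fails: its remaining VMs restart on B
```

The real machinery is far more involved (resource reservations, affinity rules, shared storage), but the decision logic follows this shape: spread load when one host is hot, and restart orphaned VMs when a host dies.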
Site-to-site recovery
Site-to-site recovery is a very specific usage model, considered an advanced model enabled by the tenets of virtualization. In this scenario, a cluster of servers powering virtual machines on a network is replicated at another site and dynamically updated on a regular basis. Generally, if an outage occurred in the primary data center, it would take a bit longer to get up and running on the secondary site due to the distance involved.
Power optimization
One of the interesting breakthroughs in Intel’s hardware-assisted virtualization is something referred to as power optimization, considered the inverse of the load balancing described above. Picture two physical servers, each running a collection of virtual machines. If Server B is underutilized, perhaps running only two virtual machines, it is possible to live-migrate those two VMs to Server A and power down Server B. The result is conservation of energy, power and costs by shutting down the now-idle physical Server B.
Thus, while load balancing is concerned with an over-utilized server potentially delivering slower service, power optimization focuses on minimizing operating costs and making sure all resources in the network are used efficiently.
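The consolidation step described above can be sketched in a few lines. This is a hypothetical illustration: the host records and capacity check are assumptions, and the "migration" is simplified to a list move.

```python
# Hypothetical sketch of power optimization: drain an under-utilized host
# onto another, then power the empty host down to save energy and cost.

def consolidate(source, target, capacity=10):
    """Migrate all VMs off `source` onto `target`, then power `source` off."""
    if len(target["vms"]) + len(source["vms"]) <= capacity:
        target["vms"].extend(source["vms"])   # simplified live migration
        source["vms"] = []
        source["powered_on"] = False          # idle hardware shut down
        return True
    return False

server_a = {"name": "A", "vms": ["vm1", "vm2", "vm3"], "powered_on": True}
server_b = {"name": "B", "vms": ["vm4", "vm5"], "powered_on": True}
consolidate(server_b, server_a)   # B's two VMs move to A; B powers down
```

The capacity guard matters: consolidation only pays off if the target host can absorb the extra VMs without becoming the over-utilized server that load balancing then has to fix.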
In conclusion, as more and more mission-critical applications are pushed into the virtualized arena of the cloud-computing model, it becomes critical to optimize the underlying hardware represented by IaaS.
This Virtualization 2.0 phase requires that hardware and software work together to deliver the full power of Infrastructure-as-a-Service (IaaS) to enterprise customers seeking to move their data centers into the cloud offered by providers such as Virtual Internet.
Check out these fun, introductory videos which illustrate the evolution of virtualisation software and its use in the cloud computing business and operating model.
Virtualisation is the software underlying on-demand cloud services, allowing enterprises to consolidate several servers into one.
By the year 2020, 80% of all computing is expected to take place within the cloud. Currently, service providers are aggressively positioning new private and public cloud offerings to enterprise customers within the IaaS, PaaS and SaaS layers.
Generally, clients have the option to move forward with entry-level VPS services or consume more powerful IaaS platforms which offer network admins root access to the underlying server.
The utility pricing model of a cloud allows an IT manager to incrementally migrate on-premise datacentre servers (and apps) to remote web hosting companies at lower price points.
The ability to scale up and down on services simplifies datacentre operations and contributes to agile software development.