Amazon Web Services (AWS) has slowly and silently phased out its Snowmobile service—an offering launched at its annual AWS re:Invent conference in 2016 to help enterprises move data from their on-premises servers to the cloud provider’s data centers to accelerate their migration to the public cloud.
The Snowmobile service, essentially an eighteen-wheel truck and trailer or “big rig” with 100 petabytes of storage and network connectivity, was commissioned by then-AWS CEO Andy Jassy (now CEO of Amazon) to help enterprises that wanted to transfer vast amounts of data, measured in petabytes or exabytes.
Physically, Snowmobile was a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. Each unit included a network cable connected to a high-speed switch capable of supporting 1 terabit per second of data transfer spread across multiple 40 gigabit per second connections.
“Assuming that an enterprise’s existing network can transfer data at that rate, it could fill a Snowmobile in about 10 days,” the company said in a blog post at the time.
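Taking the aggregate link rate as 1 terabit per second (the sum of 25 of the 40 Gbps connections), the 10-day claim is straightforward to check; the sketch below is just unit conversion on the figures AWS published:

```python
# Fill time for a 100 PB Snowmobile at 1 terabit/second aggregate throughput.
CAPACITY_PB = 100
THROUGHPUT_BPS = 1 * 10**12                # 1 Tb/s, i.e. 25 x 40 Gbps links

capacity_bits = CAPACITY_PB * 10**15 * 8   # decimal petabytes -> bits
seconds = capacity_bits / THROUGHPUT_BPS
days = seconds / 86_400
print(f"Fill time: {days:.1f} days")       # just over nine days
```

At a sustained line rate, the trailer fills in roughly 9.3 days, which AWS rounded up to "about 10 days."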
The container was also water resistant and climate controlled, though maintaining those conditions required about 350 kW of AC power. If a customer didn’t have sufficient power capacity on-site, AWS would arrange for a generator to run the unit during the transfer.
The service targeted companies in financial services, media and entertainment, and scientific sectors, among others, with the need to transfer financial records, film vaults, satellite imagery, or other scientific data, AWS said.
Data moved via Snowmobile was destined for either AWS’s S3 storage service or its Glacier storage service.
Why has Snowmobile been phased out?
The short answer is the rapid evolution of technology in the last eight years, which resulted in more efficient and affordable data transfer options than a truck driving across the American heartland.
“Since we introduced Snowmobile in 2016, we’ve released many other new services and features which have made migrating data to AWS even faster and easier for our customers,” an AWS spokesperson said in reply to an email inquiry.
“We couldn’t be more proud of the value that Snowmobile has brought to customers, and we’re pleased to see them choosing newer, more efficient technologies like AWS DataSync to bring their data to AWS,” the spokesperson said.
AWS DataSync, introduced in 2018, is an online service that automates the movement of data between on-premises data centers, edge locations, AWS storage services, and other cloud providers.
DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File System (HDFS), object storage systems, AWS Snowcone, Amazon S3 buckets, Amazon Elastic File System, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP file systems, according to the company.
In July 2023, AWS added support for moving data to and from Azure Blob Storage to the DataSync offering.
Alternatively, enterprises can use AWS Direct Connect, first released in 2011 and later expanded to all regions, to bypass the internet and create dedicated 1 Gbps, 10 Gbps, or 100 Gbps connections to AWS for data transfer. AWS claims that in many circumstances, private network connections can reduce costs, increase bandwidth, and provide a more consistent network experience than internet-based connections.
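To see why dedicated links changed the calculus, consider a hypothetical 1 PB migration over each Direct Connect tier. The dataset size and the assumed 80% sustained utilization are illustrative, not AWS figures:

```python
# Rough transfer times for a 1 PB dataset over dedicated Direct Connect links.
DATASET_PB = 1
UTILIZATION = 0.8                    # assumed: real throughput rarely hits line rate

data_bits = DATASET_PB * 10**15 * 8  # decimal petabytes -> bits

for gbps in (1, 10, 100):
    throughput_bps = gbps * 10**9 * UTILIZATION
    days = data_bits / throughput_bps / 86_400
    print(f"{gbps:>3} Gbps: {days:6.1f} days")
```

Under these assumptions a petabyte takes months over a 1 Gbps link but well under two days at 100 Gbps, which is why online transfer now covers workloads that once required a truck.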
Online beats offline for data transfers
Another reason for phasing out the Snowmobile service, AWS said, is that its enterprise customers have started preferring online data transfer over offline transfer.
Behind the shift is the more cost-effective nature of online offerings, especially when compared to massive, diesel-powered alternatives such as the Snowmobile. Nevertheless, AWS still provides offline services for enterprises that lack sufficient bandwidth.
One such option is the Snowball Edge, an edge computing device that is more cost-effective than the Snowmobile due to its higher usable capacity, lower per-device cost, smaller form factor, and shorter turnaround time. Snowball Edge ranges in capacity from 80 terabytes to 210 terabytes, and is available under various pricing and usage plans.
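Whether shipping a device beats pushing bits over the wire comes down to a break-even bandwidth. A rough sketch, assuming an 80 TB Snowball Edge and a hypothetical seven-day round trip (shipping plus AWS-side import; not an AWS SLA):

```python
# Break-even sketch: below what sustained bandwidth does shipping an 80 TB
# Snowball Edge beat transferring the same data online?
DEVICE_TB = 80
ROUND_TRIP_DAYS = 7                        # assumed shipping + import time

data_bits = DEVICE_TB * 10**12 * 8         # decimal terabytes -> bits
breakeven_bps = data_bits / (ROUND_TRIP_DAYS * 86_400)
print(f"Break-even: about {breakeven_bps / 10**9:.1f} Gbps")
```

Under these assumptions, a site with less than roughly 1 Gbps of sustained spare bandwidth still comes out ahead shipping the device, which is why AWS keeps offline options around.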
A second option is the Snowcone, an edge computing device designed for smaller data transfers. The hard-disk version provides 8 TB of available storage while the SSD version provides 14 TB, AWS said, adding that both models run specific Amazon EC2 instances with 2 vCPUs and 4 GB of available memory to support applications and AWS IoT Greengrass functions. Snowcone also offers multiple pricing options, ranging from per-day to monthly fees.
The data collected by a Snowball Edge or Snowcone device can be physically shipped to AWS or transferred online via AWS DataSync.