At its re:Invent 2024 conference in Las Vegas, Amazon on Sunday announced a somewhat unusual new service for Amazon Web Services (AWS) customers: Data Transfer Terminal, a set of physical locations where customers can plug in their storage devices to upload data to the AWS cloud.
So how’s it work, exactly? From the AWS management console, customers can reserve a time slot, optionally assign process and data transfer specialists from their organization, and visit a Data Transfer Terminal location to upload their data.
“On your reserved date and time, [you’ll] visit the location and confirm access with the building reception,” Channy Yun, a principal developer advocate at AWS, explained in a blog post. “[You’ll then be] escorted by building staff to the floor and your reserved room of the Data Transfer Terminal location […] Don’t be surprised if there are no AWS signs in the building or room. This is for security reasons to keep your work location as secret as possible.”
The initial Data Transfer Terminal locations have been opened in New York City and Los Angeles, and more will be added in the future. Each location is equipped with a patch panel, fiber optic cable, and a PC for monitoring data transfer jobs.
A pilot AWS Data Transfer Terminal location. Image Credits: AWS
Now, why would someone want to lug all their hard drives to a building and sit around and wait for the upload to finish? Well, Amazon claims that Data Transfer Terminal delivers fast upload speeds (up to 400Gbps) via a secure, “high throughput” connection.
You’ll have to pay for the privilege, though. Amazon charges “per port hour” for usage of ports in Data Transfer Terminal locations during a reservation — even when no data is being transferred.
“At a minimum, you’ll be charged per port hour for the number of hours reserved,” reads an Amazon support page. “You’ll be charged for port hours for each port you use and/or request as part of your reservation.”
Per-port charges are $300 for “U.S. to U.S.” data transfers (i.e., uploads to a U.S.-based AWS data center) and $500 for “U.S. to EU” transfers (uploads to an EU region). Amazon doesn’t list the prices for transfers to the rest of the globe.
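The billing model described above is simple to work out. As a minimal illustrative sketch (the rates come from the article; the function name and structure are mine, not an AWS API), the minimum charge multiplies reserved ports by reserved hours by the per-route rate:

```python
# Illustrative sketch of Data Transfer Terminal reservation pricing,
# using the per-port-hour rates quoted above. Hypothetical helper,
# not an actual AWS API.
RATES = {"us_to_us": 300, "us_to_eu": 500}  # USD per port hour

def reservation_cost(ports: int, hours: int, route: str = "us_to_us") -> int:
    """Minimum charge: every reserved port is billed for every reserved
    hour, whether or not any data is actually transferred."""
    return ports * hours * RATES[route]

# e.g. two ports reserved for a five-hour slot, uploading to a U.S. region:
print(reservation_cost(2, 5))  # 2 ports * 5 hours * $300 = $3,000
```

So even an idle reservation of two ports for five hours runs $3,000 before a single byte moves.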
AWS CEO Matt Garman has harsh words for remote workers: return to the office or quit. The Amazon executive recently told employees who don’t like the new five-day in-person work policy that “there are other companies around,” presumably companies they can work for remotely, Reuters reported on Thursday.
Amazon’s top boss, Andy Jassy, told employees last month that there will be a full return to office starting in 2025, an increase from the three-day in-office requirement that had been in place for roughly the last year.
Garman is just the latest tech CEO to put his foot down on remote work. Earlier this year, Dell reportedly told employees they won’t be considered for promotions if they don’t come into the office. That said, remote work likely isn’t going anywhere for most people. Studies suggest most remote workers would quit if they had to return to the office.
Amazon did not immediately respond to TechCrunch’s request for comment.
It was quite a surprise when Adam Selipsky stepped down as the CEO of Amazon’s AWS cloud computing unit. What was maybe just as much of a surprise was that Matt Garman succeeded him. Garman joined Amazon as an intern in 2005 and became a full-time employee in 2006, working on the early AWS products. Few people know the business better than Garman, whose last position before becoming CEO was as senior VP for AWS sales, marketing, and global services.
Garman told me in an interview last week that he hasn’t made any massive changes to the organization yet. “Not a ton has changed in the organization. The business is doing quite well, so there’s no need to do a massive shift on anything that we’re focused on,” he said. He did, however, point out a few areas where he thinks the company needs to focus and where he sees opportunities for AWS.
Reemphasize startups and fast innovation
One of those, somewhat surprisingly, is startups. “I think as we’ve evolved as an organization. … Early on in the life of AWS, we focused a ton on how do we really appeal to developers and startups, and we got a lot of early traction there,” he explained. “And then we started looking at how do we appeal to larger enterprises, how do we appeal to governments, how do we appeal to regulated sectors all around the world? And I think one of the things that I’ve just reemphasized — it’s not really a change — but just also emphasize that we can’t lose that focus on the startups and the developers. We have to do all of those things.”
The other area he wants the team to focus on is keeping up with the maelstrom of change in the industry right now.
“I’ve been really emphasizing with the team just how important it is for us to continue to not rest on the lead we have with regards to the set of services and capabilities and features and functions that we have today — and continue to lean forward and build that roadmap of real innovation,” he said. “I think the reason that customers use AWS today is because we have the best and broadest set of services. The reason that people lean into us today is because we continue to have, by far, the industry’s best security and operational performance, and we help them innovate and move faster. And we’ve got to keep pushing on that roadmap of things to do. It’s not really a change, per se, but it is the thing that I’ve probably emphasized the most: Just how important it is for us to maintain that level of innovation and maintain the speed with which we’re delivering.”
When I asked him if he thought that maybe the company hadn’t innovated fast enough in the past, he argued that he doesn’t think so. “I think the pace of innovation is only going to accelerate, and so it’s just an emphasis that we have to also accelerate our pace of innovation, too. It’s not that we’re losing it; it’s just that emphasis on how much we have to keep accelerating with the pace of technology that’s out there.”
Generative AI at AWS
With the advent of generative AI and how fast technologies are changing now, AWS also has to be “at the cutting edge of every single one of those,” he said.
Shortly after the launch of ChatGPT, many pundits questioned whether AWS had been too slow to launch generative AI tools itself and had left an opening for competitors like Google Cloud and Microsoft Azure. But Garman thinks that this was more perception than reality. He noted that AWS had long offered successful machine learning services like SageMaker, even before generative AI became a buzzword. He also noted that the company took a more deliberate approach to generative AI than maybe some of its competitors.
“We’d been looking at generative AI before it became a widely accepted thing, but I will say that when ChatGPT came out, there was kind of a discovery of a new area, of ways that this technology could be applied. And I think everybody was excited and got energized by it, right? … I think a bunch of people — our competitors — kind of raced to put chatbots on top of everything and show that they were in the lead of generative AI,” he said.
Instead, Garman said, the AWS team wanted to take a step back and look at how its customers, whether startups or enterprises, could best integrate this technology into their applications and use their own differentiated data to do so. “They’re going to want a platform that they can actually have the flexibility to go build on top of and really think about it as a building platform as opposed to an application that they’re going to adapt. And so we took the time to go build that platform,” he said.
For AWS, that platform is Bedrock, where it offers access to a wide variety of open and proprietary models. Just doing that — and allowing users to chain different models together — was a bit controversial at the time, he said. “But for us, we thought that that’s probably where the world goes, and now it’s kind of a foregone conclusion that that’s where the world goes,” he said. He said he thinks that everyone will want customized models and bring their own data to them.
Bedrock, Garman said, is “growing like a weed right now.”
One problem around generative AI he still wants to solve, though, is price. “A lot of that is doubling down on our custom silicon and some other model changes in order to make the inference that you’re going to be building into your applications [something] much more affordable.”
AWS’ next generation of its custom Trainium chips, which the company debuted at its re:Invent conference in late 2023, will launch toward the end of this year, Garman said. “I’m really excited that we can really turn that cost curve and start to deliver real value to customers.”
One area where AWS hasn’t necessarily even tried to compete with some of the other technology giants is in building its own large language models. When I asked Garman about that, he noted that those are still something the company is “very focused on.” He thinks it’s important for AWS to have first-party models, all while continuing to lean into third-party models as well. But he also wants to make sure that AWS’ own models can add unique value and differentiate, either through using its own data or “through other areas where we see opportunity.”
Among those areas of opportunity is cost, but also agents, which everybody in the industry seems to be bullish about right now. “Having the models reliably, at a very high level of correctness, go out and actually call other APIs and go do things, that’s an area where I think there’s some innovation that can be done there,” Garman said. Agents, he says, will open up a lot more utility from generative AI by automating processes on behalf of their users.
Q, an AI-powered chatbot
At its last re:Invent conference, AWS also launched Q, its generative AI-powered assistant. Right now, there are essentially two flavors of this: Q Developer and Q Business.
Q Developer integrates with many of the most popular development environments and, among other things, offers code completion and tooling to modernize legacy Java apps.
“We really think about Q Developer as a broader sense of really helping across the developer life cycle,” Garman said. “I think a lot of the early developer tools have been super focused on coding, and we think more about how do we help across everything that’s painful and is laborious for developers to do?”
At Amazon, the teams used Q Developer to update 30,000 Java apps, saving $260 million and 4,500 developer years in the process, Garman said.
Q Business uses similar technologies under the hood, but its focus is on aggregating internal company data from a wide variety of sources and making that searchable through a ChatGPT-like question-and-answer service. The company is “seeing some real traction there,” Garman said.
Shutting down services
While Garman noted that not much has changed under his leadership, one thing that has happened recently at AWS is that the company announced plans to shut down some of its services. That’s not something AWS has traditionally done all that often, but this summer, it announced plans to close services like its web-based Cloud9 IDE, its CodeCommit GitHub competitor, CloudSearch, and others.
“It’s a little bit of a cleanup kind of a thing where we looked at a bunch of these services, where either, frankly, we’ve launched a better service that people should move to, or we launched one that we just didn’t get right,” he explained. “And, by the way, there’s some of these that we just don’t get right and their traction was pretty light. We looked at it and we said, ‘You know what? The partner ecosystem actually has a better solution out there and we’re just going to lean into that.’ You can’t invest in everything. You can’t build everything. We don’t like to do that. We take it seriously if companies are going to bet their business on us supporting things for the long term. And so we’re very careful about that.”
AWS and the open source ecosystem
One relationship that has long been difficult for AWS — or at least has been perceived to be difficult — is with the open source ecosystem. That’s changing, and just a few weeks ago, AWS brought its OpenSearch code to the Linux Foundation and the newly formed OpenSearch Foundation.
“I think our view is pretty straightforward,” Garman said when I asked him how he thinks of the relationship between AWS and open source going forward. “We love open source. We lean into open source. I think we try to take advantage of the open source community and be a huge contributor back to the open source community. I think that’s the whole point of open source — benefit from the community — and so that is the thing that we take seriously.”
He noted that AWS has made key investments into open source and open sourced many of its own projects.
“Most of the friction has been from companies who originally started open source projects and then decided to kind of un-open source them, which I guess, is their right to do. But you know, that’s not really the spirit of open source. And so whenever we see people do that, take Elastic as the example of that, and OpenSearch [AWS’s Elasticsearch fork] has been quite popular. … If there’s a Linux [Foundation] project or Apache project or anything that we can lean into, we want to lean into it; we contribute to them. I think we’ve evolved and learned as an organization how to be a good steward in that community and hopefully that’s been noticed by others.”