The opening panel at Seattle AI Week 2025, from left: Randa Minkarah, WTIA chief operating executive; Joe Nguyen, Washington commerce director; Rep. Cindy Ryu; Nathan Lambert, Allen Institute for AI; and Brittany Jarnot, Salesforce. (GeekWire Photo / Taylor Soper)
Seattle is looking to celebrate and accelerate its leadership in artificial intelligence at the very moment the first wave of the AI economy is crashing down on the region’s tech workforce.
That contrast was hard to miss Monday evening at the opening reception for Seattle AI Week 2025 at Pier 70. On stage, panels offered a healthy dose of optimism about building the AI future. In the crowd, buzz about Amazon’s impending layoffs brought the reality of the moment back to earth.
A region that rose with Microsoft and then Amazon is now dealing with the consequences of Big Tech’s AI-era restructuring. Companies that hired by the thousands are now thinning their ranks in the name of efficiency and focus — a dose of corporate realism for the local tech economy.
The double-edged nature of this shift is not lost on Washington Gov. Bob Ferguson.
“AI, and the future of AI, and what that means for our state and the world — each day I do this job, the more that moves up in my mind in terms of the challenges and the opportunities we have,” Ferguson told the AI Week crowd. He touted Washington’s concentration of AI jobs, saying his goal is to maximize the benefits of AI while minimizing its downsides.
Gov. Bob Ferguson addresses the AI Week opening reception. (GeekWire Photo / Todd Bishop)
Seattle AI Week, led by the Washington Technology Industry Association, was started last year after a Forbes list of the nation’s top 50 AI startups included none from Seattle, said the WTIA’s Nick Ellingson, opening this year’s event. That didn’t seem right. Was it a messaging problem?
“A bunch of us got together and said, let’s talk about all the cool things happening around AI in Seattle, and let’s expand the tent beyond just tech things that are happening,” Ellingson explained.
So maybe that’s the best measuring stick: how many startups will this latest shakeout spark, and how can the Seattle region’s startup and tech leaders make it happen? Can the region become less dependent on the whims of the Microsoft and Amazon C-suites in the process?
“Washington has so much opportunity. It’s one of the few capitals of AI in the world,” said WTIA’s Arry Yu in her opening remarks. “People talk about China, people talk about Silicon Valley — there are a few contenders, but really, it’s here in Seattle. … The future is built on data, on powerful technology, but also on community. That’s what makes this place different.”
And yet, “AI is a sleepy scene in Seattle, where people work at their companies, but there’s very little activity and cross-pollinating outside of this,” said Nathan Lambert, senior research scientist with the Allen Institute for AI, during the opening panel discussion.
No, we don’t want to become San Francisco or Silicon Valley, Lambert added. But that doesn’t mean the region can’t cherry-pick some of the ingredients that put Bay Area tech on top.
Whether laid-off tech workers will start their own companies is a common question after layoffs like this. In the Seattle region at least, that outcome has been more fantasy than reality.
This is where AI could change things, if not with the fabled one-person unicorn then with a bigger wave of new companies born of this employment downturn. Who knows, maybe one will even land on that elusive Forbes AI 50 list. (Hey, a region can dream!)
But as the new AI reality unfolds in the regional workforce, maybe the best question to ask is whether Seattle’s next big thing can come from its own backyard again.
Sam Altman and OpenAI announced a new deal with Microsoft, setting revised terms for future AI development. (GeekWire File Photo / Todd Bishop)
Microsoft and OpenAI announced the long-awaited details of their new partnership agreement Tuesday morning — with concessions on both sides that keep the companies aligned but not in lockstep as they move into their next phases of AI development.
Under the arrangement, Microsoft gets a 27% equity stake in OpenAI’s new for-profit entity, the OpenAI Group PBC (Public Benefit Corporation), a stake valued at approximately $135 billion. That’s a decrease from 32.5% equity but not a bad return on an investment of $13.8 billion.
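For a rough sense of what those figures imply, here is a quick back-of-envelope calculation using the approximate numbers above; the results are ballpark estimates, not disclosed terms.

```python
# Back-of-envelope math on the reported Microsoft stake in OpenAI Group PBC.
# Figures are the approximate numbers from the announcement, not exact terms.
stake_pct = 0.27             # Microsoft's reported equity stake
stake_value = 135e9          # reported value of that stake, in dollars
total_invested = 13.8e9      # Microsoft's cumulative investment in OpenAI

implied_openai_valuation = stake_value / stake_pct   # ~$500 billion
paper_multiple = stake_value / total_invested        # ~9.8x on paper

print(f"Implied OpenAI valuation: ${implied_openai_valuation / 1e9:.0f}B")
print(f"Paper return multiple:    {paper_multiple:.1f}x")
```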
At the same time, OpenAI has contracted to purchase an incremental $250 billion in Microsoft Azure cloud services. In a significant concession made in return for that commitment, Microsoft will no longer have a “right of first refusal” on new OpenAI cloud workloads.
Microsoft, meanwhile, will retain its intellectual property rights to OpenAI models and products through 2032, an extension of the timeframe that existed previously.
A key provision of the new agreement centers on Artificial General Intelligence (AGI): any declaration of AGI by OpenAI is now subject to verification by an independent expert panel. This was a sticking point in the earlier partnership, where an ambiguous definition of AGI could have triggered provisions of the prior arrangement.
Microsoft and OpenAI had previously announced a tentative agreement without providing details. More aspects of the deal are disclosed in a joint blog post from the companies.
Shares of Microsoft are up 2% in early trading after the announcement. The company reports earnings Wednesday afternoon, and some analysts have said the uncertainty over the OpenAI arrangement has been impacting Microsoft’s stock.
Amazon CEO Andy Jassy has been pushing to reduce bureaucracy across the company. (GeekWire Photo / Todd Bishop)
Amazon confirmed Tuesday that it is cutting about 14,000 corporate jobs, citing a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.
In a message to employees, posted on the company’s website, Amazon human resources chief Beth Galetti signaled that the cutbacks are expected to continue into 2026, while indicating that the company will also continue to hire in key strategic areas.
Reuters reported Monday that the number of layoffs could ultimately total as many as 30,000 people, which is still a possibility as the cutbacks continue into next year. At that scale, the overall number of job cuts could eventually be the largest in Amazon’s history, exceeding the 27,000 positions that the company eliminated in 2023 across multiple rounds of layoffs.
“This generation of AI is the most transformative technology we’ve seen since the Internet, and it’s enabling companies to innovate much faster than ever before,” wrote Galetti, senior vice president of People Experience and Technology. Amazon needs “to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business,” she explained.
Amazon’s corporate workforce numbered around 350,000 people in early 2023, the last time the company provided a public number. At that scale, the initial reduction of 14,000 represents about 4% of Amazon’s corporate workforce. However, the number is a much smaller fraction of its overall workforce of 1.55 million people, which includes workers in its warehouses.
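The percentages above work out roughly as follows, using the approximate headcount figures cited in this article; these are rough ratios, not precise numbers.

```python
# Rough ratios behind the workforce figures cited above (approximate headcounts).
corporate_workforce = 350_000    # Amazon corporate headcount, early 2023
total_workforce = 1_550_000      # overall workforce, including warehouse workers
initial_cuts = 14_000
potential_cuts = 30_000          # upper end reported by Reuters

print(f"Initial cuts vs. corporate staff: {initial_cuts / corporate_workforce:.1%}")    # ~4.0%
print(f"Initial cuts vs. total staff:     {initial_cuts / total_workforce:.1%}")        # ~0.9%
print(f"Potential cuts vs. corporate:     {potential_cuts / corporate_workforce:.1%}")  # ~8.6%
```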
Although the cuts are expected to be global, they are likely to hit especially hard in the Seattle region, home to the company’s first headquarters and its largest corporate workforce. The tech hub has already felt the impact of major layoffs by Microsoft and many other companies in recent months.
The cuts come two days before Amazon’s third quarter earnings report. Amazon and other cloud giants have been pouring billions into capital expenses to boost AI capacity. Cutting jobs is one way of showing operating-expense discipline to Wall Street.
In a memo to employees in June, Amazon CEO Andy Jassy wrote that he expected Amazon’s total corporate workforce to get smaller over time as a result of efficiency gains from AI.
Jassy took over as Amazon CEO from founder Jeff Bezos in mid-2021. In recent years he has been pushing to reduce management layers and eliminate bureaucracy inside the company, saying he wants Amazon to operate like the “world’s largest startup.”
Bloomberg News reported this week that Jassy has told colleagues that parts of the company remain “unwieldy” despite the 2023 layoffs and other efforts to streamline operations.
Reuters cited sources saying the magnitude of the cuts also reflects the fact that Amazon’s strict return-to-office policy did not prompt as many voluntary departures as expected. Amazon brought corporate workers back to the office five days a week earlier this year.
Impacted teams and people will be notified of the layoffs today, Galetti wrote.
Amazon is offering most impacted employees 90 days to find a new role internally, though the timing may vary based on local laws, according to the message. Those who do not find a new position at Amazon or choose to leave will be offered severance pay, outplacement services, health insurance benefits, and other forms of support.
GeekWire’s Todd Bishop tries Amazon’s new smart delivery glasses in a simulated demo.
SAN FRANCISCO — Putting on Amazon’s new smart delivery glasses felt surprisingly natural from the start. Despite their high-tech components and slightly bulky design, they were immediately comfortable and barely heavier than my normal glasses.
Then a few lines of monochrome green text and a square target popped up in the right-hand lens — reminding me that these were not my regular frames.
Occupying just a portion of my total field of view, the text showed an address and a sorting code: “YLO 339.” As I learned, “YLO” represented the yellow tote bag where the package would normally be found, and “339” was a special code on the package label.
My task: find the package with that code. Or more precisely, let the glasses find it.
Amazon image from a separate demo, showing the process of scanning packages with the new glasses.
As soon as I looked at the correct package label, the glasses recognized the code and scanned the label automatically. A checkmark appeared on a list of packages in the glasses.
Then an audio alert played from the glasses: “Dog on property.”
When all the packages were scanned, the tiny green display immediately switched to wayfinding mode. A simple map appeared, showing my location as a dot and the delivery destinations marked with pins. In this simulation, there were two pins, indicating two stops.
After putting the package on the doorstep, it was time for proof of delivery. Instead of reaching for a phone, I looked at the package and pressed a button once on the small controller unit — the “compute puck” — on my harness. The glasses captured a photo.
With that, my simulated delivery was done, without ever touching a handheld device.
In my very limited experience, the biggest concern I had was the potential to be distracted — focusing my attention on the text in front of my eyes rather than the world around me. I understand now why the display automatically turns off when a van is in motion.
But when I mentioned that concern to the Amazon leaders guiding me through the demo, they pointed out that the alternative is looking down at a device. With the glasses, your gaze is up and largely unobstructed, theoretically making it much easier to notice possible hazards.
Beyond the fact that they’re not intended for public release, that simplicity is a key difference between Amazon’s utilitarian design and other augmented reality devices — such as Meta Ray-Bans, Apple Vision Pro, and Magic Leap — which aim to more fully enhance or overlay the user’s environment.
One driver’s experience
KC Pangan, who delivers Amazon packages in San Francisco and was featured in Amazon’s demo video, said wearing the glasses has become so natural that he barely notices them.
Pangan has been part of an Amazon study for the past two months. On the rare occasions when he switches back to the old handheld device, he finds himself thinking, “Oh, this thing again.”
“The best thing about them is being hands-free,” Pangan said in a conversation on the sidelines of the Amazon Delivering the Future event, where the glasses were unveiled last week.
Without needing to look down at a handheld device, he can keep his eyes up and stay alert for potential hazards. With another hand free, he can maintain the all-important three points of contact when climbing in or out of a vehicle, and more easily carry packages and open gates.
The glasses, he said, “do practically everything for me” — taking photos, helping him know where to walk, and showing his location relative to his van.
While Amazon emphasizes safety and driver experience as the primary goals, early tests hint at efficiency gains, as well. In initial tests, Amazon has seen up to 30 minutes of time savings per shift, although execs cautioned that the results are preliminary and could change with wider testing.
KC Pangan, an Amazon delivery driver in San Francisco who has been part of a pilot program for the new glasses. (GeekWire Photo / Todd Bishop)
Regulators, legislators and employees have raised red flags over new technology pushing Amazon fulfillment and delivery workers to the limits of human capacity and safety. Amazon disputes this premise, and calls the new glasses part of a larger effort to use technology to improve safety.
Using the glasses will be fully optional for both its Delivery Service Partners (DSPs) and their drivers, even when they’re fully rolled out, according to the company. The system also includes privacy features, such as a hardware button that allows drivers to turn off all sensors.
For those who use them, the company says it plans to provide the devices at no cost.
Despite the way it may look to the public, Amazon doesn’t directly employ the drivers who deliver its packages in Amazon-branded vans and uniforms. Instead, it contracts with DSPs, ostensibly independent companies that hire drivers and manage package deliveries from inside Amazon facilities.
With the introduction of smart glasses and other tech initiatives, including a soon-to-be-expanded training program, Amazon is deepening its involvement with DSPs and their drivers — potentially raising more questions about who truly controls the delivery workforce.
From ‘moonshot’ to reality
The smart glasses, still in their prototype phase, trace their origins to a brainstorming session about five years ago, said Beryl Tomay, Amazon’s vice president of transportation.
Each year, the team brainstorms big ideas for the company’s delivery system. During one of those sessions, a question emerged: What if drivers didn’t have to interact with any technology at all?
“The moonshot idea we came up with was, what if there was no technology that the driver had to interact with — and they could just follow the physical process of delivering a package from the van to the doorstep?” Tomay said in an interview. “How do we make that happen so they don’t have to use a phone or any kind of tech that they have to fiddle with?”
Beryl Tomay, Amazon’s vice president of transportation, introduces the smart glasses at Amazon’s Delivering the Future event. (GeekWire Photo / Todd Bishop)
That question led the team to experiment with different approaches before settling on glasses. It seemed kind of crazy at first, Tomay said, but they soon realized the potential to improve safety and the driver experience. Early trials with delivery drivers confirmed the theory.
“The hands-free aspect of it was just kind of magical,” she said, summing up the reaction from early users.
The project has already been tested with hundreds of delivery drivers across more than a dozen DSPs. Amazon plans to expand those trials in the coming months, with a larger test scheduled for November. The goal is to collect more feedback before deciding when the technology will be ready for wider deployment.
Typically, Amazon would have kept a new hardware project secret until later in its development. But Reuters reported on the existence of the project nearly a year ago. (The glasses were reportedly code-named “Amelia,” but they were announced without a name.) Unveiling the project now also lets Amazon get more delivery partners involved, gather input, and make improvements.
Future versions may also expand the system’s capabilities, using sensors and data to automatically recognize potential hazards such as uneven walkways.
How the technology works
Amazon’s smart glasses are part of a system that also includes a small wearable computer and a battery, integrated with Amazon’s delivery software and vehicle systems.
The lenses are photochromic, darkening automatically in bright sunlight, and can be fitted with prescription inserts. Two cameras — one centered, one on the left — support functions such as package scanning and photo capture for proof of delivery.
A built-in flashlight switches on automatically in dim conditions, while onboard sensors help the system orient to the driver’s movement and surroundings.
Amazon executive Viraj Chatterjee and driver KC Pangan demonstrate the smart glasses.
The glasses connect by a magnetic wire to a small controller unit, or “compute puck,” worn on the chest of a heat-resistant harness. The controller houses the device’s AI models, manages the visual display, and handles functions such as taking a delivery photo. It also includes a dedicated emergency button that connects drivers directly to Amazon’s emergency support systems.
On the opposite side of the chest, a swappable battery keeps the system balanced and running for a full route. Both components are designed for all-day comfort — the result, Tomay said, of extensive testing with drivers to ensure that wearing the gear feels natural when they’re moving around.
Connectivity runs through the driver’s official Amazon delivery phone via Bluetooth, and through the vehicle itself using a platform called “Fleet Edge” — a network of sensors and onboard computing modules that link the van’s status to the glasses.
This connection allows the glasses to know precisely when to activate, when to shut down, and when to sync data. When a van is put in park, the display automatically activates, showing details such as addresses, navigation cues, and package information. When the vehicle starts moving again, the display turns off — a deliberate safety measure so drivers never see visual data while driving.
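As a rough illustration of that parked-versus-moving behavior, here is a minimal sketch of the logic described above. The state names, classes, and calls are hypothetical; Amazon hasn’t published how Fleet Edge actually signals the glasses.

```python
# Minimal sketch of the parked/moving display logic described above.
# The vehicle states and display methods are invented for illustration;
# this is not Amazon's actual Fleet Edge interface.

class GlassesDisplay:
    """Stand-in for the in-lens display (illustrative only)."""
    def show_stop_details(self) -> None:
        print("Display ON: address, walking directions, package codes")

    def turn_off(self) -> None:
        print("Display OFF while the van is moving")

def on_vehicle_state_change(state: str, display: GlassesDisplay) -> None:
    # Activate the heads-up display only when the van is in park;
    # any moving state blanks it so drivers never see visual data while driving.
    if state == "PARKED":
        display.show_stop_details()
    else:
        display.turn_off()

# Example: the van shifts from driving to parked.
hud = GlassesDisplay()
on_vehicle_state_change("DRIVING", hud)
on_vehicle_state_change("PARKED", hud)
```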
Data gathered by the glasses plays a role in Amazon’s broader mapping efforts. Imagery and sensor data feed into “Project Wellspring,” a system that uses AI to better model the physical world. This helps Amazon refine maps, identify the safest parking spots, pinpoint building entrances, and optimize walking routes for future deliveries.
Amazon says the data collection is done with privacy in mind. In addition to the driver-controlled sensor shut-off button, any imagery collected is processed to “blur or remove personally identifiable information” such as faces and license plates before being stored or used.
The implications go beyond routing and navigation. Conceivably, the same data could also lay the groundwork for greater automation in Amazon’s delivery network over time.
Testing the delivery training
In addition to trying the glasses during the event at Amazon’s Delivery Station in Milpitas, Calif., I experienced firsthand just how difficult the job of delivering packages can be.
GeekWire’s Todd Bishop uses an Amazon training program that teaches drivers to walk safely on slippery surfaces.
Strapped into a harness for a slip-and-fall demo, I learned how easily a driver can lose footing on slick surfaces if not careful to walk properly.
I tried a VR training device that highlighted hidden hazards like pets sleeping under tires and taught me how to navigate complex intersections safely.
My turn in the company’s Rivian van simulator proved humbling. Despite my best efforts, I ran red lights and managed to crash onto virtual sidewalks.
GeekWire’s Todd Bishop after a highly unsuccessful attempt to use Amazon’s driving simulator.
The simulator, known as the Enhanced Vehicle Operation Learning Virtual Experience (EVOLVE), has been launched at Amazon facilities in Colorado, Maryland, and Florida, and Amazon says it will be available at 40 sites by the end of 2026.
It’s part of what’s known as the Integrated Last Mile Driver Academy (iLMDA), a program available at 65 sites currently, which Amazon says it plans to expand to more than 95 delivery stations across North America by the end of 2026.
“Drivers are autonomous on the road, and the amount of variables that they interact with on a given day are countless,” said Anthony Mason, Amazon’s director of delivery training and programs, who walked me through the training demos. One goal of the training, he said, is to give drivers a toolkit to pull from when they face challenging situations.
Suffice it to say, this is not the job for me. But if Amazon’s smart glasses live up to the company’s expectations, they might be a step forward for the drivers doing the real work.
From empty offices in 2020 to AI colleagues in 2025, the way we work has been completely rewired over the past five years. Our guest on this week’s GeekWire Podcast studies these changes closely along with her colleagues at Microsoft.
As Stallbaumer explains in the book, the five-year period starting with the pandemic and continuing to the current era of AI represents one continuous transformation in the way we work, and it’s not over yet.
“Change is the only constant—shifting norms that once took decades to unfold now materialize in months or weeks,” she writes. “As we look to the next five years, it’s nearly impossible to imagine how much more work will change.”
Listen below for our conversation, recorded on Microsoft’s Redmond campus. Subscribe on Apple or Spotify, and continue reading for key insights from the conversation.
The ‘Hollywood model’ of teams: “What we’re seeing is this movement in teams, where we’ll stand up a small squad of people who bring their own domain expertise, but also have AI added into the mix. They come together just like you would to produce a film. A group of people comes together to produce a blockbuster, and then you disperse and go back to your day job.”
The concept of the ‘frontier firm’: “They’re not adding AI as an ingredient. AI is the business model. It’s the core. And these frontier firms can have a small number of people using AI in this way, generating a pretty high run rate. So it’s a whole new way to think about shipping, creating, and innovating.”
The fallacy of ‘AI strategy’: “The idea that you just need to have an ‘AI strategy’ is a bit of a fallacy. Really, you kind of want to start with the business problem and then apply AI. … Where are you spending the most and where do you have the biggest challenges? Those are great areas to actually think about putting AI to work for you.”
Adapting to AI: “You have to build the habit and build the muscle to work in this new way and have that moment of, ‘Oh, wait, I don’t actually need to do this.’ “
The biggest risk related to AI: “The biggest risk is not AI in and of itself. It’s that people won’t evolve fast enough with AI. It’s the human risk and ability to actually start to really use these new tools and build the habit.”
Human creativity and AI: “It still takes that spark and that seed of creativity. And then when you combine it with these new tools, that’s where I have a lot of hope and optimism for what people are going to be able to do and invent in the future.”
Tye Brady, chief technologist for Amazon Robotics, introduces “Project Eluna,” an AI model that assists operations teams, during Amazon’s Delivering the Future event in Milpitas, Calif. (GeekWire Photo / Todd Bishop)
SAN FRANCISCO — Amazon showed off its latest robotics and AI systems this week, presenting a vision of automation that it says will make warehouse and delivery work safer and smarter.
But the tech giant and some of the media at its Delivering the Future event were on different planets when it came to big questions about robots, jobs, and the future of human work.
The backdrop: On Tuesday, a day before the event, The New York Times cited internal Amazon documents and interviews to report that the company plans to automate as much as 75% of its operations by 2033. According to the report, the robotics team expects automation to “flatten Amazon’s hiring curve over the next 10 years,” allowing it to avoid hiring more than 600,000 workers even as sales continue to grow.
In a statement cited in the article, Amazon said the documents were incomplete and did not represent the company’s overall hiring strategy.
On stage at the event, Tye Brady, chief technologist for Amazon Robotics, introduced the company’s newest systems — Blue Jay, a setup that coordinates multiple robotic arms to pick, stow, and consolidate items; and Project Eluna, an agentic AI model that acts as a digital assistant for operations teams.
Later, he addressed the reporters in the room: “When you write about Blue Jay or you write about Project Eluna … I hope you remember that the real headline is not about robots. The real headline is about people, and the future of work we’re building together.”
Amazon’s new “Blue Jay” robotic system uses multiple coordinated arms to pick, stow, and consolidate packages inside a fulfillment center — part of the company’s next generation of warehouse automation. (Amazon Photo)
He said the benefits for employees are clear: Blue Jay handles repetitive lifting, while Project Eluna helps identify safety issues before they happen. By automating routine tasks, he said, AI frees employees to focus on higher-value work, supported by Amazon training programs.
Brady coupled that message with a reminder that no company has created more U.S. jobs over the past decade than Amazon, noting its plan to hire 250,000 seasonal workers this year.
His message to the company’s front-line employees: “These systems are not experiments. They’re real tools built for you, to make your job safer, smarter, and more rewarding.”
‘Menial, mundane, and repetitive’
Later, during a press conference, a reporter cited the New York Times report, asking Brady whether he believes Amazon’s workforce could shrink on the scale the paper described.
Brady didn’t answer the question directly, but described the premise as speculation, saying it’s impossible to predict what will happen a decade from now. He pointed instead to the past 10 years of Amazon’s robotics investments, saying the company has created hundreds of thousands of new jobs — including entirely new job types — while also improving safety.
He said Amazon’s focus is on augmenting workers, not replacing them, by designing machines that make jobs easier and safer. The company, he added, will continue using collaborative robotics to help achieve its broader mission of offering customers the widest selection at the lowest cost.
In an interview with GeekWire after the press conference, Brady said he sees the role of robotics as removing the “menial, mundane, and repetitive” tasks from warehouse jobs while amplifying what humans do best — reasoning, judgment, and common sense.
“Real leaders,” he added, “will lead with hope — hope that technology will do good for people.”
When asked whether the company’s goal was a “lights-out” warehouse with no people at all, Brady dismissed the idea. “There’s no such thing as 100 percent automation,” he said. “That doesn’t exist.”
Tye Brady, chief technologist for Amazon Robotics, speaks about the company’s latest warehouse automation and AI initiatives during the Delivering the Future event. (GeekWire Photo / Todd Bishop)
Instead, he emphasized designing machines with real utility — ones that improve safety, increase efficiency, and create new types of technical jobs in the process.
When pressed on whether Amazon is replacing human hands with robotic ones, Brady pushed back: “People are much more than hands,” he said. “You perceive the environment. You understand the environment. You know when to put things together. Like, people got it going on. It’s not replacing a hand. That’s not the right way to think of it. It’s augmenting the human brain.”
Brady pointed to Amazon’s new Shreveport, La., fulfillment center as an example, saying the highly automated facility processes orders faster than previous generations while also adding about 2,500 new roles that didn’t exist before.
“That’s not a net job killer,” he said. “It’s creating more job efficiency — and more jobs in different pockets.”
The New York Times report offered a different view of Shreveport’s impact on employment. Describing it as Amazon’s “most advanced warehouse” and a “template for future robotic fulfillment centers,” the article said the facility uses about 1,000 robots.
Citing internal documents, the Times reported that automation allowed Amazon to employ about 25% fewer workers last year than it would have without the new systems. As more robots are added next year, it added, the company expects the site to need roughly half as many workers as it would for similar volumes of items under previous methods.
Wall Street sees big savings
Analysts, meanwhile, are taking the potential impact seriously. A Morgan Stanley research note published Wednesday — the same day as Amazon’s event and in direct response to the Times report — said the newspaper’s projections align with the investment bank’s baseline analysis.
Rather than dismissing the report as speculative, Morgan Stanley’s Brian Nowak treated the article’s data points as credible. The analysts wrote that Amazon’s reported plan to build around 40 next-generation robotic warehouses by 2027 was “in line with our estimated slope of robotics warehouse deployment.”
More notably, Morgan Stanley put a multi-billion-dollar price tag on the efficiency gains. Its previous models estimated the rollout could generate $2 billion to $4 billion in annual savings by 2027. But using the Times’ figure — that Amazon expects to “avoid hiring 160,000+ U.S. warehouse employees by ’27” — the analysts recalculated that the savings could reach as much as $10 billion per year.
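The implied per-worker figure behind that upper-end estimate is easy to back out, keeping in mind that these are the analysts’ and the Times’ round numbers rather than disclosed costs.

```python
# Rough implied annual savings per avoided hire, using the figures cited above.
avoided_hires = 160_000      # U.S. warehouse hires reportedly avoided by 2027
estimated_savings = 10e9     # Morgan Stanley's upper-end annual savings estimate

per_worker = estimated_savings / avoided_hires
print(f"Implied annual savings per avoided hire: ${per_worker:,.0f}")  # ~$62,500
```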
Back at the event, the specific language used by Amazon executives aligned closely with details in the Times report about the company’s internal communications strategy.
According to the Times, internal documents advised employees to avoid terms such as “automation” and “A.I.” and instead use collaborative language like “advanced technology” and “cobots” — short for collaborative robots — as part of a broader effort to “control the narrative” around automation and hiring.
On stage, Brady’s remarks closely mirrored that approach. He consistently framed Amazon’s robotics strategy as one of augmentation, not replacement, describing new systems as tools built for people.
In the follow-up interview, Brady said he disliked the term “artificial intelligence” altogether, preferring to refer to the technology simply as “machines.”
“Intelligence is ours,” he said. “Intelligence is very much a human thing.”
A detailed explanation of this week’s Amazon Web Services outage, released Thursday morning, confirms that it wasn’t a hardware glitch or an outside attack but a complex, cascading failure triggered by a rare software bug in one of the company’s most critical systems.
The company said a “faulty automation” in its internal systems — two independent programs that began racing each other to update records — erased key network entries for its DynamoDB database service, triggering a domino effect that temporarily broke many other AWS tools.
AWS said it has turned off the flawed automation worldwide and will fix the bug before bringing it back online. The company also plans to add new safety checks and improve how quickly its systems recover if something similar happens again.
Amazon apologized and acknowledged the widespread disruption caused by the outage.
“While we have a strong track record of operating our services with the highest levels of availability, we know how critical our services are to our customers, their applications and end users, and their businesses,” the company said, promising to learn from the incident.
The outage began early Monday and impacted sites and online services around the world, again illustrating the internet’s deep reliance on Amazon’s cloud and showing how a single failure inside AWS can quickly ripple across the web.
The optional new “Mico” persona takes its name from “Microsoft Copilot.”
Microsoft is rolling out a series of updates to its consumer Copilot AI assistant, including shared group chats, long-term memory, and an optional visual persona named Mico.
New capabilities include a “real talk” conversation style, a Learn Live feature that acts as a voice-enabled Socratic tutor, new connectors that link to services like Google Drive, Gmail, and Outlook, and deeper integration with Microsoft’s Edge browser.
Microsoft is competing against AI tools including Google’s Gemini, Amazon’s Alexa, Apple’s revamped Siri, OpenAI’s ChatGPT, and Anthropic’s Claude in the consumer market.
It looks to be the single biggest Copilot update to date from the group led by Mustafa Suleyman, the Google DeepMind co-founder who joined Microsoft last year as its AI CEO.
“This release is a milestone for what AI can deliver,” Suleyman writes in a blog post, explaining that the idea is to make Copilot a comprehensive assistant that connects users to their personal information, contacts, and tools with the goal of improving their lives.
The features are rolling out starting today in the U.S., and the company says they will be available soon in the UK, Canada, and other parts of the world. Microsoft is showing the new features in the live stream below.
Diego Oppenheimer, Seattle-based entrepreneur and investor, with his AI assistant “Actionary,” a personal project. (Photo via Oppenheimer)
Every Friday at 5 p.m., Diego Oppenheimer gets an email that remembers his week better than he does. It pulls from his calendar, meeting transcripts, and inbox to figure out what really mattered: decisions made, promises to keep, and priorities for the week ahead.
“It gives me a superpower,” said Oppenheimer, a machine-learning entrepreneur best known as the co-founder of Algorithmia, who’s now working with startups as an investor in Seattle.
What’s notable is that Oppenheimer didn’t buy this tool off the shelf — he built it. What started as a personal experiment turned into a challenge: could he still code after years away from writing production software?
With the rise of AI-powered coding assistants, he realized he could pick up where he left off. His personal project, with the unglamorous name “Actionary,” has grown to somewhere around 40,000 lines of what he jokingly calls vibe-coded “spaghetti.” It’s messy but functional.
Oppenheimer’s do-it-yourself AI assistant is more than a novelty. It’s a window into a broader shift. Individuals and companies are starting to hand off pieces of judgment and workflow to autonomous systems — software that analyzes data, makes recommendations, and acts independently.
Exploring the agentic frontier
This emerging frontier is the subject of Agents of Transformation, a new GeekWire editorial series exploring the people, companies, and ideas behind the rise of AI agents. A related event is planned for Seattle in early 2026. This independent project is underwritten by Accenture.
For this first installment, we spoke with startup founders and DIY builders working to replicate different aspects of the work of great executive assistants — coordinating calendars, managing travel, and anticipating needs — to see how close AI agents are getting to the human standard.
The consensus: today’s agents excel at narrow, well-defined tasks — but struggle with broader human judgment. Attempts to create all-purpose digital assistants often run up against the limits of current AI models.
T.A. McCann of Pioneer Square Labs.
“I might have my travel agent and my finance agent and my stock trading agent and my personal health coach agent and my home chef agent, etc.,” said T.A. McCann, a Seattle-based serial entrepreneur and managing director at Pioneer Square Labs, on a recent GeekWire Podcast episode.
McCann foresees these narrow agents handling discrete tasks, potentially coordinated by higher-level AI acting like a personal chief operating officer.
But even the term “AI agent” is up for debate. Oppenheimer defines a true agent as one with both autonomy and independent decision-making. By that standard, his system doesn’t quite qualify. It’s more a network of models completing tasks on command than a self-directed entity.
“If you asked a marketing department, they would say, absolutely, this is fully agentic,” he said. “But if I stick to my AI nerd cred, is there autonomous decision-making? Not really.”
It’s part of a much larger trend. The market for AI workplace assistants is projected to grow from $3.3 billion this year to more than $21 billion by 2030, according to MarketsandMarkets. Growth is being driven both by enterprise giants such as Microsoft and Salesforce embedding agents into workplace software, and by startups building specialized agents.
A report by the newsletter “CCing My EA,” citing an ASAP survey, notes that 26% of executive assistants (EAs) now use AI tools. Some fear job loss due to AI, but most top EAs see AI as an augmentation tool that frees time for strategic work.
From summaries to scheduling
Read AI CEO David Shim (Read AI Photo)
One company exploring this emerging frontier is Read AI, a Seattle-based startup known for its cross-platform AI meeting summarization and analysis technologies, which has raised more than $80 million in funding.
Co-founder and CEO David Shim revealed that Read AI has been internally developing and piloting an AI executive assistant called “Ada” for tasks including scheduling meetings and responding to emails.
Ada replies so quickly that Read AI has been building a delay into its email responses so they seem more natural to recipients.
Shim has been personally testing the limits of the technology — giving Ada access to a range of workplace data (from Outlook, Teams, Slack, JIRA, and other cloud services) and letting the assistant autonomously answer questions about Read AI’s business that come in from the company’s investors in response to his periodic updates.
“It answers questions that I would not have the answer to right off the bat, because it’s not just pulling from my data set, but it’s pulling in from my team’s data set,” Shim said during a fireside chat with GeekWire co-founder John Cook at a recent Accenture reception.
Shim laughed, “I’m willing to take that risk. We’re doing well, so I don’t mind giving out the data.”
However, there are limitations. Ada can struggle with complex multi-person scheduling or tasks requiring data it can’t access, and can still occasionally hallucinate. To manage this, Read AI incorporates human oversight mechanisms like “sidebars” where Ada asks for confirmation before sending replies to messages deemed more sensitive or difficult.
Shim argues against the idea of building a single, all-encompassing agent.
“The approach of agents doing everything is not the right approach,” he said. “If you try to do everything, you’re not going to do anything well.”
Instead, he believes successful AI assistants will focus on solving very specific problems, much like Google Maps gives driving directions without trying to be a general travel agent.
The “book-me-a-hotel” challenge
Travel is a use case that’s close to the heart of Brad Gerstner, founder and CEO of Altimeter Capital. Gerstner is known for backing some of the biggest names in tech — from Snowflake to Expedia — and for distilling big tech shifts into simple tests, such as his hotel booking challenge.
The specific example he gave at the 2024 Madrona IA Summit in Seattle was telling an AI agent to book the Mercer Hotel in New York on a specific day at the lowest price — a common challenge for business travelers.
“Until we can do that, we have not built a personal assistant,” he said.
That’s part of the larger problem Michael Gulmann, a former Expedia product executive, set out to solve with the startup Otto, which is developing an AI agent specifically for business travelers.
As shown publicly for the first time at this year’s Madrona conference, Otto tackled Gerstner’s specific challenge. After receiving the request to book the Mercer Hotel on a specific day, it found the cheapest available room, confirmed the price and details, and completed the booking, with minimal prompting, within about two minutes.
“Who would have thought that Brad Gerstner wanted the cheapest room?” Gulmann joked.
Michael Gulmann demos Otto at the 2025 Madrona IA Summit. (GeekWire Photo / Todd Bishop)
Otto handles various aspects of travel. It understands and learns detailed user preferences — from specific amenities like rooftop bars to preferred airline seats, hotel room types, and loyalty programs — using this knowledge to refine searches and make personalized recommendations.
As Gulmann explained in an interview, Otto doesn’t use a single monolithic model. It coordinates a bunch of narrow agents: one to interpret messages, another to manage loyalty programs, another to handle payments. Together they simulate a small operations team working behind the scenes.
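Here is a minimal sketch of that kind of coordination, with invented agent names and routing logic; Otto hasn’t published its internal architecture, so this is purely illustrative.

```python
# Illustrative sketch of a coordinator dispatching to narrow, single-purpose agents,
# in the spirit of the architecture described above. Agent names and routing rules
# are invented for the example; this is not Otto's actual design.

def interpret_message(text: str) -> str:
    """Toy intent classifier: route a request based on simple keywords."""
    lowered = text.lower()
    if "book" in lowered or "hotel" in lowered:
        return "booking"
    if "points" in lowered or "loyalty" in lowered:
        return "loyalty"
    if "pay" in lowered or "card" in lowered:
        return "payments"
    return "general"

AGENTS = {
    "booking":  lambda text: f"[booking agent] searching rates for: {text}",
    "loyalty":  lambda text: f"[loyalty agent] checking program status for: {text}",
    "payments": lambda text: f"[payments agent] preparing payment for: {text}",
    "general":  lambda text: f"[general agent] answering: {text}",
}

def handle_request(text: str) -> str:
    """Coordinator: interpret the message, then hand it to the matching narrow agent."""
    intent = interpret_message(text)
    return AGENTS[intent](text)

print(handle_request("Book the Mercer Hotel in New York at the lowest price"))
```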
Otto confirms details with the user before completing purchases, even though it could do that autonomously. Gulmann described that precaution as psychological, not technical — knowing that most people aren’t yet comfortable with AI buying things without their involvement.
After learning about Otto’s capabilities, Gerstner was impressed and wanted to see how it performs as it moves into public beta, said Mike Fridgen, a venture partner at Madrona, which incubated the company.
The grand challenge of scheduling
If hotel booking is the acid test for autonomous assistants, scheduling meetings is the everyday nightmare.
That’s the problem Howie is trying to solve. The Seattle startup’s AI assistant lives in the email inbox. CC Howie on a thread, and it proposes times, confirms with all parties, creates invites, and adds meeting links.
Howie works from a detailed “preferences document,” inspired by how experienced executives train their human EAs — which cafés are acceptable for meetings, how late is too late on Fridays, etc.
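As a purely hypothetical illustration of what such a preferences document might encode: Howie hasn’t published its format, so the fields and values below are invented for the example.

```python
# Hypothetical sketch of an executive's scheduling preferences, in the spirit of the
# "preferences document" described above. Field names and values are invented for
# illustration; this is not Howie's actual format.
preferences = {
    "working_hours": {"start": "09:00", "end": "17:30", "timezone": "America/Los_Angeles"},
    "no_meetings_after": {"Friday": "15:00"},          # "how late is too late on Fridays"
    "acceptable_cafes": ["Storyville Coffee", "Victrola"],  # in-person meeting spots
    "default_meeting_length_minutes": 30,
    "buffer_between_meetings_minutes": 15,
    "video_link_provider": "zoom",
}

def is_acceptable_slot(day: str, time_hhmm: str) -> bool:
    """Check a proposed time against the Friday cutoff preference (HH:MM strings)."""
    cutoff = preferences["no_meetings_after"].get(day)
    return cutoff is None or time_hhmm <= cutoff

print(is_acceptable_slot("Friday", "16:00"))   # False: past the Friday cutoff
print(is_acceptable_slot("Tuesday", "16:00"))  # True
```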
The company recently launched publicly with $6 million in funding and a growing number of paying customers. It uses a hybrid model: AI supported by human reviewers. That helps avoid the tiny errors that destroy trust — mixing up time zones, dropping a name from a thread, or misreading social cues.
The system simulates decisions internally, flags potential errors for review, and escalates anything ambiguous to a human before hitting send.
“If you think about the things that a great human EA does, software is not replacing that anytime soon,” said Howie co-founder Austin Petersmith.
In fact, Petersmith said, many of Howie’s users are human EAs themselves, using it to offload logistics. “Nobody wants to do scheduling,” he said. “Everybody wants the machines to take this particular task on.”
As models improve, Petersmith hopes Howie can expand into other “meta-work” — the administrative overhead that keeps knowledge workers from the higher-value activities that are still the realm of humans.
More time in the day
For Diego Oppenheimer, this isn’t a hypothetical issue. “I’m extremely calendar dyslexic,” he explained. “I’ll triple-book myself. I’ll agree to go to places I shouldn’t be. I’ll travel to the wrong city. Really bad.”
Over the years, he relied on human EAs and a chief of staff to keep him on track. But when he stepped back from running a company full-time, hiring someone just to manage his complex, multi-role calendar no longer made sense. So he built Actionary to help. It sends the Friday recap to catch him up on the week, flagging issues right before his weekend “reboot.”
Oppenheimer’s project won the People’s Choice Award at an AI Tinkerers event in New York last month. But he is very clear: Actionary is a personal project, not a product in the making. He developed it for himself, and can’t imagine taking on the headache of feature requests and technical support from others.
He’s bullish on the larger trend, and he’s both a user of and an investor in tools like Howie. But he also recognizes that AI agents can’t match the comprehensive skills and judgment of a human EA, let alone a chief of staff in a higher-level strategic role.
Oppenheimer’s ultimate goal is more straightforward, but still ambitious. “I’m trying to make time in the day,” he said. “That’s what I’m trying to do.”
GeekWire’s Todd Bishop reported and wrote this article with editing assistance from AI tools including Gemini and a custom OpenAI GPT trained in GeekWire’s editorial approach. All facts, quotes, and conclusions were reviewed and verified prior to publication.
Amazon’s new augmented reality glasses for delivery drivers are currently in testing. (Screenshot from Amazon video.)
MILPITAS, Calif. — Amazon is bringing delivery details directly to drivers’ eyeballs.
The e-commerce giant on Wednesday confirmed that it’s developing new augmented reality glasses for delivery drivers, using AI and computer vision to help them scan packages, follow turn-by-turn walking directions, and capture proof of delivery, among other features.
Amazon says the goal is to create a hands-free experience, making the job safer and more seamless by reducing the need for drivers to look down at a device.
Scenarios shown by the company make it clear that the devices activate after parking, not while driving, which could help to alleviate safety and regulatory concerns.
[Update, Oct. 23: Amazon executives said in briefings Wednesday that the glasses will be fully optional for drivers, and they’re designed with a hardware-based privacy button. This switch, located on the device’s controller, allows drivers to turn off all sensors, including the camera and microphone.
From a customer perspective, the company added that any personally identifiable information, such as faces or license plates, will be blurred to protect privacy.
Overall, Amazon is positioning the glasses as a tool to improve safety and the driver’s experience. We had a chance to try the glasses first-hand this week, and we’ll have more in an upcoming post.]
The wearable system was developed with input from hundreds of drivers, according to the company. It includes a small controller worn on the driver’s vest that houses operational controls, a swappable battery for all-day use, and a dedicated emergency button.
The AR glasses overlay delivery information on the real world. (Screenshot from Amazon video.)
The glasses are also designed to support prescription and transitional lenses. Amazon says future versions could provide real-time alerts for hazards, like pets in the yard, or notify a driver if they are about to drop a package at the wrong address.
According to Amazon, the smart glasses are an early prototype, currently in preliminary testing with hundreds of drivers in North America. The company says it’s gathering driver feedback to refine the technology before planning a broader rollout.
The announcement at Amazon’s Delivering the Future event in the Bay Area today confirms a report by The Information last month. That report also said Amazon is developing consumer AR glasses to compete with Facebook parent Meta’s AI-powered Ray Ban smart glasses.
The enterprise AR market is in flux, with early mover Microsoft pivoting away from HoloLens hardware, creating an opening for players like Magic Leap and Vancouver, Wash.-based RealWear.
A demo video released by Amazon shows a delivery driver using augmented reality (AR) glasses throughout their workflow. It begins after the driver parks in an electric Rivian van, where the glasses overlay the next delivery address directly onto a view of the road.
“Dog on property,” the audio cue cautions the driver.
The driver then moves to the cargo area, where the AR display activates to help with sorting, overlaying green highlights on the specific packages required for that stop. As the driver picks each item, it’s scanned and a virtual checklist in their vision is updated.
After retrieving all the packages from the cargo hold, the driver begins walking to the house. The glasses project a digital path onto the ground, guiding them along the walkway to the front door.
Once at the porch, the display prompts the driver to “Take photo” to confirm the delivery. After placing the items, the driver taps a chest-mounted device to take the picture. A final menu then appears, allowing the driver to “Tap to finish” the stop before heading back to the van.
Ring founder and Amazon exec Jamie Siminoff’s book, Ding Dong: How Ring Went From Shark Tank Reject to Everyone’s Front Door, is due out Nov. 10. (Courtesy Photo)
Jamie Siminoff has lived the American Dream in many ways — recovering from an unsuccessful appearance on Shark Tank to ultimately sell smart doorbell company Ring to Amazon for a reported $1 billion in 2018.
“I never set out to write a book, but after a decade of chaos, failure, wins, and everything in between, I realized this is a story worth telling,” Siminoff said in the announcement, describing Ding Dong as the “raw, true story” of building Ring, including nearly running out of money multiple times.
He added, “My hope is that it gives anyone out there chasing something big a little more fuel to keep going. Because sometimes being ‘too dumb to fail’ is exactly what gets you through.”
Siminoff rejoined the Seattle tech giant earlier this year after stepping away in 2023. He’s now vice president of product, overseeing the company’s home security camera business and related devices including Ring, Blink, Amazon Key, and Amazon Sidewalk.
San Francisco Mayor Daniel Lurie speaks at an Amazon event at the San Francisco-Marin Food Bank. (GeekWire Photo / Todd Bishop)
SAN FRANCISCO — Facing renewed threats of federal intervention from President Trump, Mayor Daniel Lurie used an appearance at an Amazon event Tuesday to make the case that San Francisco is “on the rise,” citing its AI-fueled revival as proof of a broader comeback.
Without naming Trump or explicitly citing the proposal to deploy the National Guard, Lurie pushed back on the national narrative of urban decline — pointing to falling crime rates, new investment, and the city’s central role in the AI boom.
Lurie, who took office earlier this year, said San Francisco is “open for business” again, name-checking OpenAI and other prominent companies in the city as examples of the innovation fueling its recovery. Mayors of other cities, he said, would die to have one of the many AI companies based in San Francisco.
“Every single metric is heading in the right direction,” Lurie said, noting that violent crime is at its lowest level since the 1950s and car break-ins are at a 22-year low, among other stats.
He was speaking at the San Francisco-Marin Food Bank, as Amazon hosted journalists from around the country and the world on the eve of its annual Delivering the Future event, where the company shows its latest robotics and logistics innovations.
“I want you to tell everybody, wherever you come from, that San Francisco’s on the rise,” he said. “You tell them there’s a new mayor in town, that we’ve got this, and we do.”
Amazon and leaders of San Francisco-Marin Food Bank highlighted their partnership that uses the company’s delivery network to bring food to community members who can’t get to a pantry. The company said Tuesday it has delivered more than 60 million meals for free from food banks across the US and UK, committing to continue the program through 2028.
A New York Times report on Tuesday, citing internal Amazon documents, said the company wants to automate 75% of its operations in the coming years to be able to avoid hiring hundreds of thousands of workers. It noted that the company is looking at burnishing its image through community programs to counteract the long-term fallout.
Executives noted that Amazon has focused in the Seattle region on affordable housing, in line with its approach of adapting to different needs in communities where it operates.
Lurie pointed to the company’s San Francisco food bank partnership as a model for other companies. “Amazon is showing that they are committed to San Francisco,” he said.
Microsoft CEO Satya Nadella speaks at the company’s 50th anniversary event. (GeekWire Photo / Kevin Lisota)
Microsoft CEO Satya Nadella’s total 2025 compensation rose nearly 22% from $79.1 million to almost $96.5 million, due mostly to the company’s booming share price boosting the value of his stock awards.
The numbers were disclosed Tuesday afternoon in the company’s annual proxy statement, along with details on Microsoft board changes, shareholder proposals raising concerns about AI risks, and a request from the board for shareholders to approve a new stock plan.
Microsoft laid off more than 15,000 employees this year — one of the most aggressive rounds of cuts in its history — citing shifting priorities and the need for efficiency amid record spending on AI infrastructure. Wall Street reacted positively to the effort to rein in operating expenses.
Much of Nadella’s total compensation — about $84.2 million — is based on the performance of the company’s stock, which has risen more than 23% in the past year, at one point pushing Microsoft’s total market value briefly past $4 trillion.
Also announced in the proxy: Microsoft’s board nominated Walmart CFO John David Rainey as a new board member, to replace Carlos Rodriguez, current chair of the compensation committee, who is not seeking re-election.
The company’s 2025 fiscal year ended June 30. In evaluating Nadella’s performance, the board cited his work leading the expansion of the company’s AI infrastructure, Microsoft Copilot adoption and new security initiatives.
Microsoft chart, see 2025 proxy for footnotes and more information. (Click to enlarge.)
His cash incentive bonus was $9.56 million, up from the $5.2 million paid in 2024, when he requested a reduction. The proxy statement said the increase reflected strong financial results (117% of target) and a high operational assessment (151.67% of target).
For the first time, security was used as one measuring stick for Microsoft executive compensation, part of an effort by the company to appease regulators and lawmakers after a series of high-profile breaches. In its review, the board focused on Nadella’s role in attempting to address these issues through the implementation of its Secure Future Initiative.
In addition, Microsoft’s board is asking shareholders to approve a 2026 Stock Plan to replace the expiring 2017 plan, requesting authorization for up to 226 million new shares, which it says are needed to continue granting equity awards to attract and retain talent.
Nadella recently appointed veteran executive Judson Althoff as CEO of Microsoft’s commercial business, a move designed to free Nadella to focus more intensely on long-term AI strategy and technology.
Microsoft’s annual meeting, held virtually, is slated for 8:30 a.m. Dec. 5.