
Google: Exact match keywords won’t block broad match in AI Max


Ginny Marvin, Google’s Ads Liaison, is clarifying how keyword match types interact with AI Overviews (AIO) and AI Mode ad placements — addressing ongoing confusion among advertisers testing AI Max and mixed match-type setups.

Why we care. As ads expand into AI-powered placements, advertisers need to understand which keywords are eligible to serve — and when — to avoid unintentionally blocking reach or misreading performance.

Back in May. Responding to questions from Marketing Director Yoav Eitani, Marvin confirmed that an ad can serve either above or below an AI Overview or within the AI Overview — but not both in the same auction:

  • “Your ad could trigger to show either above/below AIO or within AIO, but not both at this time,” Marvin confirmed.

While both exact and broad match keywords can be eligible to trigger ads above or below AIO, only broad match keywords (or keywordless targeting) are eligible to trigger ads within AI Overviews.

What’s changed. In a follow-up exchange with Paid Search specialist Toan Tran, Marvin clarified that Google has updated how eligibility works. Previously, the presence of an exact match keyword could prevent a broad match keyword from serving in AI Overviews. That is no longer the case.

  • “The presence of the same keyword in exact match will not prevent the broad match keyword from triggering an ad in an AI Overview, since the exact match keyword is not eligible to show Ads in AI Overviews and hence not competing with the broad match keyword,” Marvin said.

Since exact and phrase match keywords are not eligible for AI Overview placements, they do not compete with broad match keywords in that auction — meaning broad match can still trigger ads within AIO even when the same keyword exists as exact match.

The big picture. Google is reinforcing a clear separation between traditional keyword matching and AI-powered intent matching. Ads in AI Overviews rely on a deeper understanding of both the user query and the AI-generated content, which is why eligibility is limited to broader targeting signals.

The bottom line. Exact and phrase match keywords won’t show ads in AI Overviews — but they also won’t block broad match from doing so. For advertisers leaning into AI Max and AIO placements, broad match and keywordless strategies are now essential to unlocking reach in Google’s AI-driven surfaces.

Larian's Swen Vincke Says Divinity Will Be a Turn-based RPG, Hints at "Next Level" Experience

The unveiling of Divinity at this year's edition of The Game Awards was widely celebrated. Since releasing Baldur's Gate 3 back in 2023, Larian Studios has moved on from working within the constraints of the Dungeons & Dragons (D&D) license. RPG enthusiasts have long pondered the prospect of revisiting the Divinity universe. Last week, the Belgian-headquartered development house did not reveal many details regarding its next flagship project. Its pagan ritual-filled cinematic trailer did not contain any gameplay footage. Online gaming communities have debated Divinity's gameplay systems: will it be a real-time action experience, or a turn-based affair? Recently, Larian head honcho Swen Vincke sat down with Bloomberg's Jason Schreier.

In a Q&A session, the company's founder acknowledged that his team will try to build on the success of Baldur's Gate 3. This will be a big challenge, Vincke elaborated: "it's more. It's more pressure. The weight of the expectations weighs high. We're trying not to think about it, because we have to make our own thing." Larian's last in-house fantasy IP, Divinity: Original Sin II, was released back in 2017. Continuing his train of thought, he opined: "Baldur's Gate 3 was a good game, and I'm proud of it, but I think this one (Divinity) is going to be way better...This is going to be us unleashed, I think. It's a turn-based RPG featuring everything you've seen from us in the past, but it's brought to the next level." Additionally, Vincke confirmed that Divinity will be released via an early access model, likely following a similar trail pioneered by BG3's staggered release roadmap.

(PR) Sucker Punch Productions Co-Founder Stepping Down, New Leadership Duo Revealed

After nearly three decades helping bring to life iconic franchises like Sly Cooper, inFAMOUS, Ghost of Tsushima, and Ghost of Yōtei, Brian Fleming has announced he's passing Sucker Punch studio leadership on to a new generation. Over the past year, Brian has worked closely with PlayStation Studios to ensure that Sucker Punch was in the best hands moving forward with a strong foundation for the studio's continued success. Starting January 1, longtime creative and technical leaders Jason Connell and Adrian Bentley will step into new roles as studio heads, continuing to guide the team's focus on ambitious, character-driven experiences that define PlayStation Studios.

New Studio Heads: Jason Connell and Adrian Bentley
Jason and Adrian have been instrumental in shaping the creative and technical direction of Sucker Punch in recent years. Jason, who served as Co-Creative Director on the Ghost franchise, has helped define the studio's visual identity and storytelling style, bringing cinematic depth and emotional resonance to PlayStation players around the world. Adrian, as Technical Director, has led the studio's engineering and production efforts, driving innovation across development tools and game systems that make Sucker Punch's worlds so immersive. Together, they represent the perfect blend of creative vision and technical excellence that has always set the studio apart.

Core Ultra 7 365 Test Sample Gets Geekbenched, Scores Indicate Generational Regression

As of this morning, a mysterious "Lenovo 4810X90100" test platform has produced slightly concerning benchmark scores. The freshly archived database entry shows a next-gen laptop/notebook being driven by an unreleased Intel Core Ultra 7 365 "Panther Lake-H" mobile processor. Overall Geekbench 6.3 scores of 2451 (single-core) and 9714 (multicore) were swiftly noted down by keen trackers of emerging PC hardware. According to Geekbench browser data, Team Blue's upcoming mid-range SKU is an 8-core design, with two clusters of four cores each. This leak seems to align with official Intel preview material that describes a generic product's configuration of four P-cores plus four Low Power E-cores.

The leaked "Panther Lake-H" part's Geekbench tallies were compared to scores generated by similarly positioned predecessors: the Core Ultra 7 268V and Core Ultra 7 258V. Surprisingly, the older "Lunar Lake" mobile CPUs, on average, outperform their younger sibling. When weighing up the differences between the Core Ultra 7 268V and Core Ultra 7 365 SKUs, Wccftech acknowledged clock speed disparities and contrasting TDP ranges. Unfortunately, the leaked "Panther Lake-H" candidate trails behind in both single-core and multicore scores, by 6% and 7%, respectively. Notebookcheck reckoned that the evaluated Core Ultra 7 365 chip offers underwhelming "12th-gen-like performance." When cross-referenced against the Core Ultra 7 258V, Intel's newer chip lagged behind in both single-core (11%) and multicore (10%) scenarios. It is plausible that the "Lenovo 4810X90100" platform was set up with pre-release drivers, therefore putting its hosted Core Ultra 7 365 processor at a disadvantage.

Microsoft updates the Xbox Wireless Headset with Bluetooth LE Audio

In a blog post on Xbox Wire, Microsoft has announced a new firmware update for the Xbox Wireless Headset that adds Bluetooth LE Audio support for compatible Windows 11 devices. The blog post specifically mentions the ASUS ROG Xbox Ally and Xbox Ally X, but any PC with Windows 11 and Bluetooth LE/LC3 audio support will be able to take advantage of the new features. Microsoft recently announced super wideband stereo voice support for Windows 11, and the Xbox Wireless Headset is the first headset we're aware of that supports the feature.

Microsoft also promises that the firmware update will bring lower latency—this is part of the LC3 audio codec when compared to the old SBC audio codec—and improved battery life, although the company didn't say by how much. Finally, the firmware will bring a preview of a broadcast audio feature that will allow "Windows 11 insiders on select PCs" to share game audio with other Bluetooth devices, but the blog post doesn't go into any details of how this will work. The Xbox Wireless Headset will also be able to take advantage of super wideband stereo audio in Windows 11, which Microsoft claims will offer better in-game audio, although for those who already own a wireless headset with a dedicated dongle, this might not bring any real-world advantages. To update the Xbox Wireless Headset, you need to use the Xbox Accessories app on an Xbox console or in Windows 11, and note that the update only applies to the 2024 version of the headset.

NVIDIA Powers Nearly 72% of Cloud Accelerator Locations, AMD at 5.8%

The cloud accelerator market share study comes from UBS, one of the banks that closely tracks market trends. According to UBS, NVIDIA commands 71.2% of the cloud accelerator location share, which covers GPUs and other types of ASICs. On the AMD front, the situation is entirely different: the data shows AMD capturing only 5.8% of cloud accelerator locations. The remaining 22.3% is split between ASICs designed by the likes of AWS, Meta, Broadcom, MediaTek, and many others.

NVIDIA's GPUs have made their way into 258 data center locations operated by cloud GPU providers. This is a significant number, considering that some of these customers are ordering tens and even hundreds of thousands of GPUs at once, especially as demand expands. NVIDIA is still the backbone of the entire AI infrastructure, and even its older GPUs, including the A100 and H100, capture a significant part of the GPU cloud market. As "Blackwell" SKUs, designed for million-GPU-scale systems, ramp up and we enter the "Rubin" era, NVIDIA's lead may only expand.

ChatGPT Images Unleashes Unprecedented Visual Agility


The latest iteration of ChatGPT Images, powered by OpenAI’s new flagship image generation model, GPT Image 1.5, signals a profound shift in the accessibility and flexibility of visual content creation. This release, demonstrated through a dynamic visual showcase, moves beyond simple image generation to offer sophisticated editing and stylistic transformations that were once the exclusive […]


The 2026 AI predictions: Why infrastructure will fail, but apps will fly.


While Big Tech faces supply chain bottlenecks and AGI timelines push into the 2030s, AI application startups are set to achieve unprecedented scale in 2026.


OpenAI’s new ChatGPT Images is 4x faster and more precise: Everything you need to know


The new ChatGPT Images, powered by GPT Image 1.5, delivers 4x faster generation speeds and crucial improvements in editing consistency and text rendering.


AI’s Unseen Cost: Political Pressure Mounts on Data Center Energy Demands


The burgeoning computational demands of artificial intelligence are rapidly colliding with public policy and local politics, as highlighted in a recent CNBC “Money Movers” segment. CNBC Business News TechCheck Anchor Deirdre Bosa reported on growing political pressure stemming from the massive energy consumption of AI data centers, revealing a new front of risk for the […]


Your Support Team Should Ship Code – Lisa Orr, Zapier


Lisa Orr, Product Leader at Zapier, shared a compelling narrative about how her company is leveraging artificial intelligence to transform its support operations, enabling the support team to actively ship code. The core problem was the sheer volume of support tickets generated by API changes, overwhelming traditional support workflows. Zapier’s journey began with a clear […]


Google AI Overviews surged in 2025, then pulled back: Data

Google rapidly expanded AI Overviews in search during 2025, then pulled back as they moved into commercial and navigational queries. These findings are based on a new Semrush analysis of more than 10 million keywords from January to November.

AI Overviews surged, then retreated. Google didn’t roll out AI Overviews in a straight line in 2025. A mid-year spike gave way to a pullback, suggesting Google moved fast to test the feature, then eased off based on user data:

  • January: 6.5% of queries triggered an AI Overview.
  • July: AI Overview visibility peaked, appearing in just under 25% of queries.
  • November: Coverage fell back to less than 16% of queries.

Zero-click behavior defied expectations. Surprisingly, click-through rates for keywords with AI Overviews have steadily risen since January. AI Overviews don’t automatically reduce clicks and may even encourage them.

  • AI Overviews still appear more often on searches that already tend to drive no clicks.
  • But when Semrush compared the same keywords before and after an AI Overview appeared, zero-click rates fell from 33.75% to 31.53%.

Informational queries no longer dominate. Early 2025 AI Overviews were almost entirely informational:

  • January: 91% informational
  • October: 57% informational

Now, AI Overviews are appearing for commercial and transactional queries:

  • Commercial queries: Increased from 8% to 18%
  • Transactional queries: Increased from 2% to 14%

Navigational queries are rising fast. In an unexpected shift, AI summaries are increasingly intercepting brand and destination searches:

  • Navigational AI Overviews grew from under 1% in January to more than 10% by November.

Google Ads + AI Overviews. Earlier this year, ads rarely appeared next to AI Overviews. Now they’re common:

  • Ads alongside AI Overviews rose from about 3% in January to roughly 40% by November.
  • Ads show at the bottom of around 25% of AI Overview SERPs.

Science is the most impacted industry. By keyword saturation, Science leads all verticals for AI Overviews at 25.96%. Computers & Electronics follows at 17.92%, with People & Society close behind at 17.29%.

  • Since March, Food & Drink has seen the fastest growth in AI Overviews of any category.
  • Meanwhile, Real Estate, Shopping, and Arts & Entertainment remain lightly affected, with AI Overviews appearing on fewer than 3% of keywords.

Why we care. AI Overviews are unevenly and persistently reshaping click behavior, commercial visibility, and ad placement. Volatility is likely to continue, so closely monitor performance shifts tied to AI Overviews.

The report. Semrush AI Overviews Study: What 2025 SEO Data Tells Us About Google’s Search Shift

Dig deeper. In May, I reported on the original version of Semrush’s study in Google AI Overviews now show on 13% of searches: Study.

Compromised IAM Credentials Power a Large AWS Crypto Mining Campaign

An ongoing campaign has been observed targeting Amazon Web Services (AWS) customers using compromised Identity and Access Management (IAM) credentials to enable cryptocurrency mining. The activity, first detected by Amazon's GuardDuty managed threat detection service and its automated security monitoring systems on November 2, 2025, employs never-before-seen persistence techniques to hamper

Nixxes updates Ghost of Tsushima to enable FSR ML Frame Generation

Hotfix 8 for Ghost of Tsushima adds FSR ML Frame Generation to the game

Nixxes Software has released its “Patch 8 Hotfix” for Ghost of Tsushima’s PC version, adding support for AMD FSR ML Frame Generation. This new Frame Generation technique is part of AMD’s FSR “Redstone” update. With this update, users of AMD’s Radeon […]


(PR) "A Game About Feeding A Black Hole" Available Now on PC & Linux Platforms

It's finally here. Celebrate A Game About Feeding A Black Hole's launch with a limited-time 30% discount on Steam. Thank you to everyone who joined us along the way. As always, keep feeding the black hole.

Feeders, Andy and I set out to build a small project along the lines of an incremental game that made financial sense for the time we'd put in. For myself, it has been a rollercoaster of excitement, stress, worry, and more. I would also describe it as a rocketship, and every time we think things are calming down...BAM! It feels like THIS: absolute chaos, you want to keep going, and you can't take your eyes off the moment.

Minisforum EU & UK Stores Start Selling EOP4A OCuLink External GPU Card

Minisforum's UK and Europe webstores have listed a brand-new EOP4A OCuLink external GPU card. The tiny adapter, measuring 80 mm x 40 mm (not including the bracket), is advertised as offering a native high-performance interface. According to the manufacturer's typo- and error-laden promo material, this design "delivers lossless PCIe (4.0) to OCuLink conversion," thus "unleashing the full potential of external NVMe storage and compute accelerators." The Chinese tech specialist's EOP4A model seems destined for deployment in desktop setups, with an externally connected graphics card boosting performance across content creation, AI development, and gaming applications.

The product specification sheet mentions a PCIe-to-OCuLink (SFF-8611/8612) interface, compatible PCIe slot types (x1, x4, x8, and x16), and support for up to PCIe 4.0 x4 bandwidth. The adapter is bundled with low-profile and full-height brackets. Plug-and-play functionality (i.e., driverless operation) is a big selling point. Currently, the EOP4A is listed with a tidy discount: for prospective UK buyers, £20 has been deducted from the standard £89 price tag (including VAT). A similar launch promotion is in effect at the Minisforum EU webshop, albeit with local taxes applied at checkout. The premium-built EOP4A can be picked up for £69 (~$93 USD) or €55 + VAT (~$65 USD). By contrast, Minisforum's North American webstore directs potential customers to a placeholder product page. At the time of writing, this listing bears no pricing or stock availability info.

FSP Launches S210 mATX Small Tower Case Series

FSP has introduced the S210, a compact microATX small tower case with a 23-liter volume. The case measures 350 × 212 × 310 mm and can accommodate full-length graphics cards up to 340 mm and CPU air coolers up to 170 mm in height. It's made of 0.7 mm SPCC steel, paired with a single tempered glass side panel. A metal mesh section covering the front and top of the case improves airflow, while a built-in carry handle adds basic portability. As with many small form factor cases, power supply support is limited to standard ATX units up to 140 mm in length. Cooling options include space for two 120 mm fans or a 240 mm radiator at the top, along with a single 120 mm rear fan.

Storage support consists of one 3.5-inch drive bay (shared with an SSD mount) and two dedicated 2.5-inch SSD mounts. For expansion, it features four PCIe slots, while the front I/O panel is minimal, offering two USB 3.0 ports, an audio jack, and a power button. The S210 supports microATX and Mini-ITX motherboards and is currently listed in black as the S210-B variant, so other color options may arrive in the future. Pricing and exact availability haven't yet been announced by FSP.

Kingston Warns SSD Shortage Will Get Worse in the Next 30 Days

The modern-day SSD business is split between two main perspectives: some predict that NAND flash shortages will ease, while others expect the situation to worsen. Kingston belongs to the latter group, with the company's datacenter SSD business manager, Cameron Crandall, discussing the topic on The Full Nerd Network podcast. According to him, the NAND flash shortage is expected to worsen significantly over the next 30 days, leading to an increase in SSD prices from current levels. In 2025 alone, Kingston reports that NAND flash prices have surged by 246% from the first quarter until now, with 70% of that increase occurring in just the last 60 days. This indicates that many price hikes have already worked their way into the supply chain.

Edward Crisler, the PR manager for Sapphire, advised PC gamers and potential buyers not to rush into panic buying. He stated, "The good news is, I don't think the real pain we're experiencing now and for the next six months or so will last much longer than that." However, he also highlighted that the main challenge is the uncertainty of the situation. That uncertainty is pushing OEMs to lock in larger NAND flash contracts, while no one besides the NAND flash manufacturers can tell exactly what is going on in the supply chain, or how long the massive demand will outpace supply.

(PR) Worldwide Server Market Revenue Increased 61% in Q3 of 2025, According to IDC

According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, the server market reached a record $112.4 billion in revenue during the third quarter of the year. The quarter delivered another high double-digit growth rate, with vendor revenue up 61% year-over-year (YoY) compared to the same quarter of 2024. Revenue generated from x86 servers increased 32.8% in 2025Q3 to $76.3 billion, while non-x86 servers increased 192.7% YoY to $36.2 billion.

Revenue for servers with an embedded GPU grew 49.4% year-over-year in the third quarter of 2025, representing more than half of total server market revenue. The fast pace at which hyperscalers and cloud service providers have been adopting servers with embedded GPUs has fueled the server market's growth; the market has almost doubled in size compared to 2024, with revenue of $314.2 billion for the first three quarters of 2025.

(PR) Team Cherry Introduces "Sea of Sorrow" - Hollow Knight: Silksong's First Expansion

Heya Everyone! With Hollow Knight: Silksong's launch year coming to a close, we wanted to take a moment and share a peek at what we've been working on, what's available now, and what's coming in 2026. But firstly, and most importantly, we wanted to say a huge thank you to all the players who've braved Silksong's distant and dangerous lands. That's over seven million of you who've purchased the game, alongside millions more playing on Xbox Game Pass!

It's a truly staggering number of players, more than we could have ever expected (enough to crash all of the storefronts!). Watching the community grow, seeing the amazing art, the mods, the unexpected strategies, and the support between players through the game's challenges has been hugely rewarding for us here at home. Your continued enthusiasm remains a massive motivator as we work towards expanding the game even further. That first big expansion is already well underway!

AMD Ryzen AI 9 465 APU Geekbenched, "Gorgon Point" Offers Negligible Performance Leap

AMD's not-yet-launched "Ryzen AI 400 series" seems to be nearing finalized form, given an uptick of leaks across the latter half of 2025. Yesterday morning, an ASUS Vivobook S 15 (M5650GA) test platform posted further "Gorgon Point" details within the Geekbench browser database. This next-gen ultra-thin laptop was driven by a Ryzen AI 9 465 APU; the same SKU also turned up in a CrossMark entry late last month. Looking at the fresh Geekbench 6.5 results, the evaluated ASUS device achieved overall scores of 2780 (single-core) and 12001 (multicore). Compared to its direct predecessor, the unreleased mobile chip only holds an advantage in the single-core stakes.

It is possible that the Ryzen AI 9 465 mobile processor was put through its paces under unfavorable conditions, e.g. running on immature drivers and firmware. The direct forebear, Team Red's readily available Ryzen AI 9 365 "Strix Point" mobile processor, holds a 474-point advantage in multicore scenarios. Industry observers reckon that the "Gorgon Point" generation will deploy with improved onboard XDNA 2 NPUs, albeit combined with familiar "Strix Point" credentials. The forthcoming Ryzen AI 9 465 SKU seems to share its predecessor's base specifications: 10 cores/20 threads, a maximum 5.0 GHz boost/turbo clock, and a Radeon 880M iGPU. Amusingly, Geekbench has identified the leaked part as belonging to the "Strix Point" mobile family—it is widely believed that "Gorgon Point" will emerge as a mild refresh of the "previous" generation.

Rec'd – Discover places you'll love through people you trust


Rec'd is a social discovery platform that turns trusted social signals into personalised recommendations. Right now, people use multiple apps to discover places, save them, verify them, and book. Rec'd integrates this process into one powerful, AI-based app that lets people discover the way they want, saving everything into one clean and intelligent platform.


New 1.4nm nanoimprint lithography template could reduce the need for EUV steps in advanced process nodes — questions linger as no foundry has yet committed to nanoimprint lithography for high-volume manufacturing

Japan’s Dai Nippon Printing (DNP) claims to have developed a nanoimprint lithography template capable of patterning logic with a feature size of 1.4nm, with plans for mass production in 2027.

AI Liberation: Unlocking Potential Beyond “Security Theater”


The prevailing narrative around artificial intelligence often centers on the race for capability, but a recent discussion on the Latent Space podcast unveiled a contrasting, equally vital perspective: the imperative of liberation and radical transparency in AI development. Pliny the Liberator, renowned for his “universal jailbreaks” that dismantle the guardrails of frontier models, and John […]


Greylock’s Enduring Legacy: People, Principles, and the AI Frontier


Greylock Partners, a venture capital firm celebrating its 60th anniversary this year, offers a compelling study in enduring success through relentless adaptation and an unwavering commitment to core principles. In a recent episode of Uncapped with Jack Altman, General Partner Saam Motamedi, one of Greylock’s youngest partners, delved into the foundational elements that have allowed […]


What We Learned Deploying AI within Bloomberg’s Engineering Organization – Lei Zhang, Bloomberg


“The reality of applying AI at scale inside a mature engineering organization is far more complex and nuanced,” stated Lei Zhang, Head of Technology Infrastructure Engineering at Bloomberg, during a recent discussion. Zhang, speaking about Bloomberg’s extensive experience integrating AI into the workflows of over 9,000 software engineers, offered a candid look at the practical […]


The AI Memory Wars Heat Up Around Video


Video has become a dominant signal on the internet. It powers everything from Netflix’s $82 billion Warner Bros Discovery acquisition to the sensor streams feeding warehouse robots and city surveillance grids. Yet beneath this sprawl, AI systems are hitting a wall in that they can tag clips and rank highlights, but they struggle to remember […]


The enterprise blueprint for winning visibility in AI search

We are navigating the “search everywhere” revolution – a disruptive shift driven by generative AI and large language models (LLMs) that is reshaping the relationship between brands, consumers, and search engines.

For the last two decades, the digital economy ran on a simple exchange: content for clicks. 

With the rise of zero-click experiences, AI Overviews, and assistant-led research, that exchange is breaking down.

AI now synthesizes answers directly on the SERP, often satisfying intent without a visit to a website. 

Platforms such as Gemini and ChatGPT are fundamentally changing how information is discovered. 

For enterprises, visibility increasingly depends on whether content is recognized as authoritative by both search engines and AI systems.

That shift introduces a new goal – to become the source that AI cites.

A content knowledge graph is essential to achieving that goal. 

By leveraging structured data and entity SEO, brands can build a semantic data layer that enables AI to accurately interpret their entities and relationships, ensuring continued discoverability in this evolving economy.

This article explores:

  • The difference between traditional search and AI search, including the concept of comprehension budget.
  • Why schema and entity optimization are foundational to discovery in AI search.
  • The content knowledge graph and the importance of organizational entity lineage.
  • The enterprise entity optimization playbook and deployment checklist.
  • The role of schema in the agentic web.
  • How connected journeys improve customer discovery and total cost of ownership.

The fundamental difference between traditional and AI search

To become a source that AI cites, it’s essential to understand how traditional search differs from AI-driven search.

Traditional search functioned much like software as a service. 

It was deterministic, following fixed, rule-based logic and producing the same output for the same input every time.

AI search is probabilistic. 

It generates responses based on patterns and likelihoods, which means results can vary from one query to the next. 

Even with multimodal content, AI converts text, images, and audio into numerical representations that capture meaning and relationships rather than exact matches.

For AI to cite your content, you need a strong data layer combined with context engineering – structuring and optimizing information so AI can interpret it as reliable and trustworthy for a given query.

As AI systems rely increasingly on large-scale inference rather than keyword-driven indexing, a new reality has emerged: the cost of comprehension. 

Each time an AI model interprets text, resolves ambiguity, or infers relationships between entities, it consumes GPU cycles, increasing already significant computing costs.

A comprehension budget is the finite allocation of compute that determines whether content is worth the effort for an AI system to understand.

4 foundational elements for AI discovery

For content to be cited by AI, it must first be discovered and understood. 

While many discovery requirements overlap with traditional search, key differences emerge in how AI systems process and evaluate content.

[Image: AI discovery - foundational elements]

1. Technical foundation

Your site’s infrastructure must allow AI engines to crawl and access content efficiently. 

With limited compute and a finite comprehension budget, platform architecture matters. 

Enterprises should support progressive crawling of fresh content through IndexNow integration to optimize that budget.

Ideally, this capability is native to the platform and CMS.
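
As a rough illustration, here is a minimal Python sketch of an IndexNow submission following the public protocol: a JSON payload listing changed URLs is POSTed to an IndexNow endpoint. The host, key, key location, and URLs are placeholders, and the key file must be hosted on your domain for verification.

```python
import json
import urllib.request

# Minimal IndexNow submission: POST a JSON payload of changed URLs to an
# IndexNow endpoint. Replace host, key, keyLocation, and urlList with your
# own values; the key must also be hosted at keyLocation for verification.
payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/products/new-widget",
        "https://www.example.com/blog/updated-guide",
    ],
}

request = urllib.request.Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    # A 200/202 status indicates the submission was accepted for processing.
    print(response.status)
```

In practice, a call like this would hang off the CMS publish hook so fresh or updated pages are announced immediately rather than waiting to be recrawled.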

2. Helpful content

Before creating content, you need an entity strategy that accurately and comprehensively represents your brand. 

Content should meet audience needs and answer their questions. 

Structuring content around customer intent, presenting it in clear “chunks,” and keeping it fresh are all important considerations.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

3. Entity optimization

Schema markup, clean information architecture, consistent headings, and clear entity relationships help AI engines understand both individual pages and how multiple pieces of content relate to one another. 

Rather than forcing models to infer what a page is about, who it applies to, or how information connects, businesses make those relationships explicit.

4. Authority

AI engines, like traditional search engines, prioritize authoritative content from trusted sources. 

Establishing topical authority is essential. For location-based businesses, local relevance and authority are also critical to becoming a trusted source.

The myth: Schema doesn’t work

Many enterprises claim to use schema but see no measurable lift, leading to the belief that schema doesn’t work. 

The reality is that most failures stem from basic implementations or schema deployed with errors.

Tags such as Organization or Breadcrumb are foundational, but they provide limited insight into a business. 

Used in isolation, they create disconnected data points rather than a cohesive story AI can interpret.

The content knowledge graph: Telling AI your story

The more AI knows about your business, the better it can cite it. 

A content knowledge graph is a structured map of entities and their relationships, providing reliable information about your business to AI systems.

Deep nested schema plays a central role in building this graph.

[Image: Entity lineage for deep nested schema]

A deep nested schema architecture expresses the full entity lineage of a business in a machine-readable form.

In Resource Description Framework (RDF) terms, AI systems need to understand that:

  • An organization creates a brand.
  • The brand manufactures a product.
  • The product belongs to a category.
  • Each category serves a specific purpose or use case.

By fully nesting entities – Organization → Brand → Product → Offer → PriceSpecification → Review → Person – you publish a closed-loop content knowledge graph that models your business with precision.
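
To make the idea concrete, here is a minimal sketch of a deeply nested product graph generated in Python. The types and properties used (Product, Brand, Organization, Offer, PriceSpecification, Review, Person, sameAs) are standard Schema.org terms, but the business names, values, and Wikidata URL are placeholders, and the graph is rooted at the Product for brevity rather than prescribing any particular ontology.

```python
import json

# Deep nested JSON-LD sketch: one Product node carries its entity lineage
# (brand, manufacturer, offer, price specification, review) so AI systems
# don't have to infer the relationships. All values are placeholders.
product_graph = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Trail Shoe",
    "sku": "ACME-TS-001",
    "brand": {"@type": "Brand", "name": "Acme Outdoor"},
    "manufacturer": {
        "@type": "Organization",
        "name": "Acme Corporation",
        "url": "https://www.example.com",
        "sameAs": ["https://www.wikidata.org/wiki/Q0000000"],  # placeholder entity ID
    },
    "category": "Footwear > Trail Running",
    "offers": {
        "@type": "Offer",
        "availability": "https://schema.org/InStock",
        "priceSpecification": {
            "@type": "PriceSpecification",
            "price": 129.00,
            "priceCurrency": "USD",
        },
    },
    "review": {
        "@type": "Review",
        "reviewBody": "Comfortable and durable on rocky terrain.",
        "reviewRating": {"@type": "Rating", "ratingValue": 5},
        "author": {"@type": "Person", "name": "Jane Doe"},
    },
}

# Emit markup ready to drop into a <script type="application/ld+json"> tag.
print(json.dumps(product_graph, indent=2))
```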

Dig deeper: 8 steps to a successful entity-first strategy for SEO and content

The enterprise entity optimization playbook

In “How to deploy advanced schema at scale,” I outlined the full process for effective schema deployment – from developing an entity strategy through deployment, maintenance, and measurement.

Automating for operational excellence

At the enterprise level, facts change constantly, including product specifications, availability, categories, reviews, offers, and prices. 

If structured data, entity lineage, and topic clusters do not update dynamically to reflect these changes, AI systems begin to detect inconsistencies.

In an AI-driven ecosystem where accuracy, coherence, and consistency determine inclusion, even small discrepancies can erode trust.

Manual schema management is not sustainable.

The only scalable approach is automation – using a schema management solution aligned with your entity strategy and integrated into your discovery and marketing flywheel.
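
As a sketch of what that automation can check, the snippet below compares the facts held in a single source of truth against the JSON-LD currently deployed on a page and reports any drifted fields. The records, field names, and checks are hypothetical; a production system would pull both sides from live systems.

```python
from typing import Any

# Hypothetical single source of truth for one product (e.g., from a PIM or feed).
SSOT = {
    "sku": "ACME-TS-001",
    "price": 129.00,
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
}

# JSON-LD as currently deployed on the page (already parsed from the markup).
deployed = {
    "@type": "Product",
    "sku": "ACME-TS-001",
    "offers": {
        "@type": "Offer",
        "price": 119.00,  # stale price -> drift
        "priceCurrency": "USD",
        "availability": "https://schema.org/OutOfStock",  # stale availability -> drift
    },
}

def detect_drift(ssot: dict[str, Any], markup: dict[str, Any]) -> list[str]:
    """Return descriptions of fields where the markup no longer matches the SSOT."""
    offer = markup.get("offers", {})
    checks = {
        "sku": markup.get("sku"),
        "price": offer.get("price"),
        "priceCurrency": offer.get("priceCurrency"),
        "availability": offer.get("availability"),
    }
    return [
        f"{field}: expected {ssot[field]!r}, deployed {found!r}"
        for field, found in checks.items()
        if found != ssot[field]
    ]

for issue in detect_drift(SSOT, deployed):
    print("DRIFT:", issue)
# A real pipeline would revalidate, redeploy the corrected markup, and ping IndexNow.
```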

Measuring success: KPIs for the generative AI era

As keyword rankings lose relevance and traffic declines, you need new KPIs to evaluate performance in AI search.

  • Brand visibility: Is your brand appearing in AI search results?
  • Brand sentiment: When your brand is cited, is the sentiment positive, negative, or neutral?
  • LLM visibility: Beyond branded queries, how does your performance on non-branded terms compare with competitors?
  • Conversions: At the bottom of the funnel, are conversion metrics being tracked and optimized?

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026

From reading to acting: Preparing for the agentic web

The web is shifting from a “read” model to an “act” model.

AI agents will increasingly execute tasks on behalf of users, such as booking appointments, reserving tables, or comparing specifications.

To be discovered by these agents, brands must make their capabilities machine-callable. Key steps to prepare include:

  • Create a schema layer: Define entity lineage and executable capabilities in a machine-readable format so agents can act on your behalf.
  • Use action vocabularies: Leverage Schema.org action vocabularies to provide semantic meaning and define agent capabilities (see the sketch after this list), including:
    • ReserveAction.
    • BookAction.
    • CommunicateAction.
    • PotentialAction.
  • Establish guardrails: Declare engagement rules, required inputs, authentication, and success or failure semantics in a structured format that machines can interpret.
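
Here is a minimal, hypothetical sketch of a machine-callable capability: a restaurant entity exposes a reservation via Schema.org's potentialAction and ReserveAction, with an EntryPoint describing where an agent can act. The URL template, platforms, and business details are placeholders, and real deployments would also declare required inputs and authentication.

```python
import json

# Hypothetical agent-callable capability: a restaurant exposes a reservation
# action via schema.org potentialAction. The urlTemplate and details are
# placeholders; adapt them to your own booking flow.
restaurant = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Acme Bistro",
    "url": "https://www.example.com/bistro",
    "potentialAction": {
        "@type": "ReserveAction",
        "target": {
            "@type": "EntryPoint",
            "urlTemplate": "https://www.example.com/bistro/reserve?date={date}&partySize={partySize}",
            "inLanguage": "en-US",
            "actionPlatform": [
                "https://schema.org/DesktopWebPlatform",
                "https://schema.org/MobileWebPlatform",
            ],
        },
        "result": {
            "@type": "FoodEstablishmentReservation",
            "name": "Table reservation",
        },
    },
}

print(json.dumps(restaurant, indent=2))
```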

Brands that are callable are the ones that will be found. Acting early provides a compounding advantage by shaping the standards agents learn first.

The enterprise entity deployment checklist

Use this checklist to evaluate whether your entity strategy is operational, scalable, and aligned with AI discovery requirements.

  • Entity audit: Have you defined your core entities and validated the facts?
  • Deep nesting: Does your JSON-LD reflect your business ontology, or is it flat?
  • Authority linking: Are you using sameAs to connect entities to Wikidata and the Knowledge Graph?
  • Actionable schema: Have you implemented PotentialAction for the agentic web?
  • Automation: Do you have a system in place to prevent schema drift?
  • Single source of truth (SSOT): Is schema synchronized across your CMS, GBP, and internal systems?
  • Technical SEO: Are the technical foundations in place to support an effective entity strategy?
  • IndexNow: Are you enabling progressive and rapid indexing of fresh content?

Connected customer journeys and total cost of ownership

[Image: Connected customer discovery flywheel]

Your martech stack must align with the evolving customer discovery journey. 

This requires a shift from treating schema as a point solution for visibility to managing a holistic presence with total cost of ownership in mind.

Data is the foundation of any composable architecture. 

A centralized data repository connects technologies, enables seamless flow, breaks down departmental silos, and optimizes cost of ownership.

This reduces redundancy and improves the consistency and accuracy AI systems expect.

When schema is treated as a point solution, content changes can break not only schema deployment but the entire entity lineage. 

Fixing individual tags does not restore performance. Instead, multiple teams – SEO, content, IT, and analytics – are pulled into investigations, increasing cost and inefficiency.

The solution is to integrate schema markup directly into brand and entity strategy.

When structured content changes, it should be:

  • Revalidated against the organization’s entity lineage.
  • Dynamically redeployed.
  • Pushed for progressive indexing through IndexNow.

This enables faster recovery and lower compute overhead.

Integrating schema into your entity lineage and discovery flywheel helps optimize total cost of ownership while maximizing efficiency.

A strategic blueprint for AI readiness

Several core requirements define AI readiness.

[Image: AI-ready enterprise strategy]
  • Data: Centralized, unified, consistent, and reliable data aligned to customer intent is the foundation of any AI strategy.
  • Connected journeys and composable architecture: When data is unified and structured with schema, customer journeys can be connected across channels. A composable martech stack enables consistent, personalized experiences at every touchpoint.
  • Structured content: Define organizational entity lineage and create a semantic layer that makes content machine- and agent-ready.
  • Distribution: Break down silos and move from channel-specific tactics to an omnichannel strategy, supported by a centralized data source and progressive crawling of fresh content.

Together, these efforts make your omnichannel strategy more durable while reducing total cost of ownership across the technology stack.

Thanks to Bill Hunt and Tushar Prabhu for their contributions to this article.

When Google’s AI bidding breaks – and how to take control

Google’s pitch for AI-powered bidding is seductive.

Feed the algorithm your conversion data, set a target, and let it optimize your campaigns while you focus on strategy. 

Machine learning will handle the rest.

What Google doesn’t emphasize is that its algorithms optimize for Google’s goals, not necessarily yours. 

In 2026, as Smart Bidding becomes more opaque and Performance Max absorbs more campaign types, knowing when to guide the algorithm – and when to override it – has become a defining skill that separates average PPC managers from exceptional ones.

AI bidding can deliver spectacular results, but it can also quietly destroy profitable campaigns by chasing volume at the expense of efficiency. 

The difference is not the technology. It is knowing when the algorithm needs direction, tighter constraints, or a full override.

This article explains:

  • How AI bidding actually works.
  • The warning signs that it is failing.
  • The strategic intervention points where human judgment still outperforms machine learning.

How AI bidding actually works – and what Google doesn’t tell you

Smart Bidding comes in several strategies, including:

  • Target CPA.
  • Target ROAS.
  • Maximize Conversions.
  • Maximize Conversion Value.

Each uses machine learning to predict the likelihood of a conversion and adjust bids in real time based on contextual signals.

The algorithm analyzes hundreds of signals at auction time, such as:

  • Device type.
  • Location.
  • Time of day.
  • Browser.
  • Operating system.
  • Audience membership.
  • Remarketing lists.
  • Past site interactions.
  • Search query.

It compares these signals with historical conversion data to calculate an optimal bid for each auction.

During the “learning period,” typically seven to 14 days, the algorithm explores the bid landscape, testing bid levels to understand the conversion probability curve. 

Google recommends patience during this phase, and in general, that advice holds. The algorithm needs data.

The first problem is that learning periods are not always temporary. 

Some campaigns get stuck in perpetual learning and never achieve stable performance.

Dig deeper: When to trust Google Ads AI and when you shouldn’t

Google’s optimization goals vs. your business goals

The algorithm optimizes for metrics that drive Google’s revenue, not necessarily your profitability.

When a Target ROAS of 400% is set, the algorithm interprets that as “maximize total conversion value while maintaining a 400% average ROAS.” 

Notice the word “maximize.”

The system is designed to spend the full budget and, ideally, encourage increases over time. 

More spend means more revenue for Google.

Business goals are often different. 

You may want a 400% ROAS with a specific volume threshold. 

You may need to maintain margin requirements that vary by product line. 

Or you may prefer a 500% ROAS at lower volume because fulfillment capacity is constrained.

The algorithm does not understand this context. 

It sees a ROAS target and optimizes accordingly, often pushing volume at the expense of efficiency once the target is reached.

This pattern is common. An algorithm increases spend by 40% to deliver 15% more conversions at the target ROAS. Technically, it succeeds. 

In practice, cash flow cannot support the higher ad spend, even at the same efficiency. 

The algorithm does not account for working capital constraints.
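
A small worked example (with hypothetical numbers) shows how this plays out: blended ROAS can sit exactly at target while the marginal return on the extra spend is far lower.

```python
# Hypothetical numbers: the core spend beats the target, which leaves room for
# the algorithm to add lower-quality volume and still report 400% blended ROAS.
TARGET_ROAS = 4.00  # 400%

core_spend, core_value = 10_000, 50_000              # 500% ROAS on the efficient core
incremental_spend, incremental_value = 4_000, 6_000  # 150% ROAS on the added volume

blended_roas = (core_value + incremental_value) / (core_spend + incremental_spend)
marginal_roas = incremental_value / incremental_spend

print(f"Blended ROAS:  {blended_roas:.0%}")   # 400% -> target met on paper
print(f"Marginal ROAS: {marginal_roas:.0%}")  # 150% -> the extra spend may be unprofitable
```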

Key signals the algorithm can’t understand

AI bidding works well, but it has limits. 

Without intervention, several factors can’t be fully accounted for.

Seasonal patterns not yet reflected in historical data

Launch a campaign in October, and the algorithm has no visibility into a December peak season.

It optimizes based on October performance until December data proves otherwise, often missing early seasonal demand.

Product margin differences

A $100 sale of Product A with a 60% margin and a $100 sale of Product B with a 15% margin look identical to the algorithm. 

Both register as $100 conversions. The business impact, however, is very different. 

This is where profit tracking, profit bidding, and margin-based segmentation matter.

Customer lifetime value variations

Unless lifetime value modeling is explicitly built into conversion values, the algorithm treats a first-time customer the same as a repeat buyer. 

In most accounts, that modeling does not exist.

Market and competitive changes

When a competitor launches an aggressive promotion or a new entrant appears, the algorithm continues bidding based on historical conditions until performance degrades enough to force adjustment. 

Market share is often lost during that lag.

Inventory and supply chain constraints

If a best-selling product is out of stock for two weeks, the algorithm may continue bidding aggressively on related searches because of past performance. 

The result is paid traffic that cannot convert.

This is not a criticism of the technology. It’s a reminder that the algorithm optimizes only within the data and parameters provided. 

When those inputs fail to reflect business reality, optimization may be mathematically correct but strategically wrong.

Warning signs your AI bidding strategy is failing

The perpetual learning phase

Learning periods are normal. Extended learning periods are red flags.

If your campaign shows a “Learning” status for more than two weeks, something is broken. 

Common causes include:

  • Insufficient conversion volume – the algorithm typically needs at least 30 to 50 conversions per month.
  • Frequent changes that reset the learning period.
  • Unstable performance with wide day-to-day fluctuations.

When to intervene

If learning extends beyond three weeks, either:

  • Increase the budget to accelerate data collection.
  • Loosen the target to allow more conversions.
  • Or switch to a less aggressive bid strategy like Enhanced CPC. 

Sometimes the algorithm is simply telling you it does not have enough data to succeed.

Budget pacing issues

Healthy AI bidding campaigns show relatively smooth budget pacing. 

Daily spend fluctuates, but it stays within reasonable bounds. 

Problematic patterns include:

  • Front-loaded spending – 80% of the daily budget gone by 10 a.m.
  • Consistent underspending, such as averaging 60% of budget per day.
  • Volatile day-to-day swings, like spending $800 one day, $200 the next, then $650 after that.

Budget pacing is a proxy for algorithm confidence. 

Smooth pacing suggests the system understands your conversion landscape. 

Erratic pacing usually means it is guessing.
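
One lightweight way to quantify pacing health is to track the day-to-day variability of spend. The sketch below, with hypothetical thresholds, flags volatile pacing and chronic underspend from a series of daily spend figures like the swings described above.

```python
from statistics import mean, pstdev

def pacing_report(daily_spend: list[float], daily_budget: float, cv_threshold: float = 0.35) -> None:
    """Flag erratic pacing via the coefficient of variation and chronic under-delivery."""
    avg = mean(daily_spend)
    cv = pstdev(daily_spend) / avg  # relative volatility of daily spend
    utilization = avg / daily_budget
    print(f"avg spend {avg:,.0f} | budget utilization {utilization:.0%} | volatility (CV) {cv:.2f}")
    if cv > cv_threshold:
        print("WARNING: volatile day-to-day pacing - the algorithm may still be guessing")
    if utilization < 0.7:
        print("WARNING: chronic underspend - targets may be too tight for the available volume")

# Swings similar to the example above ($800, $200, $650...), against a $700 daily budget.
pacing_report([800, 200, 650, 720, 310, 690, 540], daily_budget=700)
```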

The efficiency cliff

This is the most dangerous pattern. Performance starts strong, then gradually or suddenly deteriorates.

This shows up often in Target ROAS campaigns. 

  • Month 1: 450% ROAS, excellent. 
  • Month 2: 420%, still good. 
  • Month 3: 380%, concerning. 
  • Month 4: 310%, alarm bells.

What happened? 

The algorithm exhausted the most efficient audience segments and search terms. 

To keep growing volume – because it is designed to maximize – it expanded into less qualified traffic. 

Broad match reached further. Audiences widened. Bid efficiency declined.
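
A simple monitoring script can surface this slide before month four. The sketch below walks a monthly ROAS series (the trajectory described above) and alerts when ROAS falls more than a chosen tolerance below its peak; the 15% drawdown threshold is an assumption to tune per account.

```python
def detect_efficiency_cliff(monthly_roas: list[float], max_drawdown: float = 0.15) -> bool:
    """Return True if ROAS has slid from its peak by more than the allowed drawdown."""
    peak = monthly_roas[0]
    alerted = False
    for month, roas in enumerate(monthly_roas[1:], start=2):
        peak = max(peak, roas)
        drawdown = (peak - roas) / peak
        if drawdown > max_drawdown:
            print(
                f"Month {month}: ROAS {roas:.0%} is {drawdown:.0%} below the {peak:.0%} peak "
                "- investigate expansion into weaker traffic"
            )
            alerted = True
    return alerted

# The trajectory described above: 450% -> 420% -> 380% -> 310%.
detect_efficiency_cliff([4.50, 4.20, 3.80, 3.10])
```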

Traffic quality deterioration

Sometimes the numbers look fine, but qualitative signals tell a different story. 

  • Engagement declines – bounce rate rises, time on site falls, pages per session drop. 
  • Geographic shifts appear as the algorithm drives traffic from lower-value regions. 
  • Device mix changes, often skewing toward mobile because CPCs are cheaper, even when desktop converts better. 
  • Time-of-day misalignment can also emerge, with traffic arriving when sales teams are unavailable.

These quality signals do not directly influence optimization because they are not part of the conversion data. 

To address them, the algorithm needs constraints: bid adjustments, audience exclusions, or ad scheduling.

The search terms report reveals the truth

The search terms report is the truth serum for AI bidding performance. 

Export it regularly and look for:

  • Low-intent queries receiving aggressive bids.
  • Informational searches mixed with transactional ones.
  • Irrelevant expansions where the algorithm chased conversions into entirely different intent.

A high-end furniture retailer should not spend $8 per click on “free furniture donation pickup.” 

A B2B software company targeting “project management software” should not appear for “project manager jobs.” 

These situations occur when the algorithm operates without constraints. 

Keyword matching is also looser than it was in the past, which means even small gaps can allow the system to bid on queries you never intended to target.
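
As a rough sketch, the snippet below scans exported search-terms rows for low-intent patterns (drawn from the examples above) and for spend with no conversions, surfacing negative-keyword candidates. The patterns, thresholds, and rows are illustrative only.

```python
import re

# Hypothetical low-intent patterns based on the examples above; tune these to
# your own account before treating matches as negative-keyword candidates.
LOW_INTENT_PATTERNS = [r"\bfree\b", r"\bdonation\b", r"\bjobs?\b", r"\bsalary\b", r"\bhow to\b", r"\bdiy\b"]

# Rows as exported from the search terms report: (query, clicks, cost, conversions).
search_terms = [
    ("free furniture donation pickup", 42, 336.00, 0),
    ("project manager jobs", 18, 95.00, 0),
    ("mid century walnut dining table", 25, 210.00, 3),
]

for query, clicks, cost, conversions in search_terms:
    flagged = any(re.search(pattern, query) for pattern in LOW_INTENT_PATTERNS)
    if flagged or (conversions == 0 and cost > 100):
        print(f"Review as negative-keyword candidate: '{query}' ({clicks} clicks, ${cost:.2f}, {conversions} conv.)")
```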

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Strategic intervention points: When and how to take control

Segmentation for better control

One-size-fits-all AI bidding breaks down when a business has diverse economics. 

The solution is segmentation, so each algorithm optimizes toward a clear, coherent goal.

Separate high-margin products – 40%+ margin – into one campaign with more aggressive ROAS targets, and low-margin products – 10% to 15% margin – into another with more conservative targets. 

If the Northeast region delivers 450% ROAS while the Southeast delivers 250%, separate them. 

Brand campaigns operate under fundamentally different economics than nonbrand campaigns, so optimizing both with the same algorithm and target rarely makes sense.

Segmentation gives each algorithm a clear mission. Better focus leads to better results.
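
A sketch of how that split might be automated from a product feed, using the margin bands mentioned above as assumed thresholds:

```python
from collections import defaultdict

# Hypothetical product feed rows: (product_id, gross_margin). Thresholds follow
# the margin bands above (40%+ vs. roughly 10-15%); adjust to your own economics.
products = [("sku-101", 0.48), ("sku-102", 0.12), ("sku-103", 0.42), ("sku-104", 0.15), ("sku-105", 0.27)]

def campaign_for(margin: float) -> str:
    if margin >= 0.40:
        return "high-margin campaign (aggressive tROAS)"
    if margin <= 0.15:
        return "low-margin campaign (conservative tROAS)"
    return "mid-margin campaign"

buckets: dict[str, list[str]] = defaultdict(list)
for sku, margin in products:
    buckets[campaign_for(margin)].append(sku)

for campaign, skus in buckets.items():
    print(f"{campaign}: {', '.join(skus)}")
```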

Bid strategy layering

Pure automation is not always the answer. 

In many cases, hybrid approaches deliver better results.

  • Run Target ROAS at 400% under normal conditions, then manually lower it to 300% during peak season to capture more volume when demand is high. 
  • Use Maximize Conversion Value with a bid cap if unit economics cannot support bids above $12. 
  • Group related campaigns under a portfolio Target ROAS strategy so the algorithm can optimize across them. 
  • For campaigns with limited conversion data or volatile performance, Enhanced CPC offers algorithmic assistance without full black box automation.

The hybrid approach

The most effective setups combine AI bidding with manual control campaigns.

Allocate 70% of the budget to AI bidding campaigns, such as Target ROAS or Maximize Conversion Value, and 30% to Enhanced CPC or manual CPC campaigns. 

Manual campaigns act as a baseline. If AI underperforms manual by more than 20% after 90 days, the algorithm is not working for the business.

Use tightly controlled manual campaigns to capture the most valuable traffic – brand terms and high-intent keywords – while AI campaigns handle broader prospecting and discovery. 

This approach protects the core business while still exploring growth opportunities.
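
The 70/30 check can be reduced to a few lines. The figures below are hypothetical 90-day totals; the 20% shortfall rule follows the guideline above.

```python
# Hypothetical 90-day totals for the 70/30 split described above.
ai_spend, ai_value = 70_000, 250_000          # AI bidding portfolio
manual_spend, manual_value = 30_000, 138_000  # manual / Enhanced CPC baseline

ai_roas = ai_value / ai_spend
manual_roas = manual_value / manual_spend
shortfall = (manual_roas - ai_roas) / manual_roas

print(f"AI ROAS {ai_roas:.0%} vs. manual baseline {manual_roas:.0%} (shortfall {shortfall:.0%})")
if shortfall > 0.20:
    print("AI bidding underperforms the baseline by more than 20% after 90 days - rework targets, segmentation, or inputs")
```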

COGS and cart data reporting (plus profit optimization beta)

Google now allows advertisers to report cost of goods sold, or COGS, and detailed cart data alongside conversions. 

This is not about bidding yet, but seeing true profitability inside Google Ads reporting.

Most accounts optimize for revenue, or ROAS, not profit. 

A $100 sale with $80 in COGS is very different from a $100 sale with $20 in COGS, but standard reporting treats them the same. 

With COGS reporting in place, actual profit becomes visible, dramatically improving the quality of performance analysis.
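
Using the two $100 sales above and an assumed $25 ad cost per sale, a quick calculation shows why revenue-based ROAS hides the difference while profit-based metrics expose it.

```python
# Two $100 sales that look identical to revenue-based ROAS reporting.
# The COGS figures come from the example above; the $25 ad cost per sale is an assumption.
sales = [
    {"product": "low-COGS item", "revenue": 100.0, "cogs": 20.0, "ad_cost": 25.0},
    {"product": "high-COGS item", "revenue": 100.0, "cogs": 80.0, "ad_cost": 25.0},
]

for sale in sales:
    roas = sale["revenue"] / sale["ad_cost"]       # what standard reporting shows
    gross_profit = sale["revenue"] - sale["cogs"]  # what COGS reporting reveals
    poas = gross_profit / sale["ad_cost"]          # profit-on-ad-spend
    net = gross_profit - sale["ad_cost"]
    print(
        f"{sale['product']}: ROAS {roas:.1f}x | gross profit ${gross_profit:.0f} "
        f"| POAS {poas:.1f}x | net after ad cost ${net:.0f}"
    )
```

Same ROAS, opposite business outcomes, which is exactly the gap COGS reporting closes.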

To set it up, conversions must include cart-level parameters added to existing tracking. 

These typically include item ID, item name, quantity, price, and, critically, the cost_of_goods_sold parameter for each product.

Google is testing a bid strategy that optimizes for profit instead of revenue. 

Access is limited, but advertisers with clean COGS data flowing into Google Ads can request entry. 

In this model, bids are optimized around actual profit margins rather than raw conversion value. 

This is especially powerful for retailers with wide margin variation across products.

For advertisers without access to the beta, a custom margin-tracking pixel can be implemented manually. It is more technical to set up, but it achieves the same outcome.

Dig deeper: Margin-based tracking: 3 advanced strategies for Google Shopping profitability

When AI bidding actually works

AI bidding works best when the fundamentals are in place: 

  • Sufficient conversion volume.
  • A stable business model with consistent margins and predictable seasonality.
  • Clean conversion tracking.
  • Enough historical data to support learning.

In these conditions, AI bidding often outperforms manual management by processing more signals and making more granular optimizations than humans can execute at scale.

This tends to be true in:

  • Mature ecommerce accounts.
  • Lead generation programs with consistent lead values.
  • SaaS models with predictable trial-to-paid conversion paths.

When those conditions hold, the role shifts.

Bid management gives way to strategic oversight – monitoring trends, identifying expansion opportunities, and testing new structures.

The algorithm then handles tactical optimization.

Preparing for AI-first advertising

Google is steadily reducing advertiser control under the banner of automation. 

  • Performance Max has absorbed Smart Shopping and Local campaigns. 
  • Asset groups replace ad groups. 
  • Broad match becomes mandatory in more contexts. 
  • Negative keywords increasingly function as suggestions the system may or may not honor.

For advertisers with complex business models or specific strategic goals, this loss of granularity creates tension. 

You are often asked to trust the algorithm even when business context suggests a different decision.

That shift changes the role. You are no longer a bid manager. 

You are an AI strategy director who:

  • Defines objectives.
  • Provides business context.
  • Sets constraints.
  • Monitors outcomes.
  • Intervenes when the system drifts away from strategic intent.

No matter how advanced AI bidding becomes, certain decisions still require human judgment. 

Strategic positioning – which markets to enter and which product lines to emphasize – cannot be automated. 

Neither can creative testing, competitive intelligence, or operational realities like inventory constraints, margin requirements, and broader business priorities.

This is not a story of humans versus AI. It is humans directing AI.

Dig deeper: 4 times PPC automation still needs a human touch

Master the algorithm, don’t serve it

AI-powered bidding is the most powerful optimization tool paid media has ever had. 

When conditions are right – sufficient data, a stable business model, and clean tracking – it delivers results manual management cannot match.

But it is not magic.

The algorithm optimizes for mathematical targets within the data you provide. 

If business context is missing from that data, optimization can be technically correct and strategically wrong. 

If markets change faster than the system adapts, performance erodes. 

If your goals diverge from Google’s revenue incentives, the algorithm will pull in directions that do not serve the business.

The job in 2026 is not to blindly trust automation or stubbornly resist it. 

It is to master the algorithm – knowing when to let it run, when to guide it with constraints, and when to override it entirely.

The strongest PPC leaders are AI directors. They do not manage bids. They manage the system that manages bids.

Rogue NuGet Package Poses as Tracer.Fody, Steals Cryptocurrency Wallet Data

Cybersecurity researchers have discovered a new malicious NuGet package that typosquats and impersonates the popular .NET tracing library and its author to sneak in a cryptocurrency wallet stealer. The malicious package, named "Tracer.Fody.NLog," remained on the repository for nearly six years. It was published by a user named "csnemess" on February 26, 2020. It masquerades as "Tracer.Fody,"

(PR) Ubisoft Absorbs March of Giants IP & Dev Team, Formerly an Amazon Montreal Game

Ubisoft has announced that it has acquired March of Giants, formerly developed by Amazon's Montreal Games studio. The team will join Ubisoft to continue development of the free-to-play title, and is currently working on the next major update. March of Giants puts players in some big shoes, placing them in the role of a giant combatant on a ravaged urban battlefield inspired by the technology of the early 1900s. The 4v4 tactical multiplayer online battle arena (MOBA) sees players fighting through lanes, commanding battalions of comparatively pint-sized enemies, and deploying Battleworks—such as trenches, tanks, turrets, and bunkers—to eliminate the enemy team. The gameplay combines elements of real-time strategy and lane-based combat with large-scale coordination and mastery of the giant heroes that make up its roster.

The team behind the game consists of a group of veteran developers, led by two former Ubisoft team members with ties to some of the company's biggest franchises. Alex Parizeau, Senior Production lead, is the former Managing Director of Ubisoft Toronto, having overseen several major AAA productions. Xavier Marquis, Creative Director, was the original Creative Director behind Rainbow Six Siege. Both bring their wealth of experience, passion, and expertise to the development of this new title.

Beelink ME Pro Teaser Showcases DIY Swappable AMD, Arm, & Intel Board System

Beelink has started marketing a slick-looking ME Pro NAS system; updates posted over the past weekend—mostly aimed at the Chinese market—contain only scant details. Western audiences were treated to a 35-second-long teaser trailer that shows off some of the next-gen device's internals. The Shenzhen-based manufacturer advertises the ME Pro model as a NAS/mini PC hybrid—the standard version seems to accommodate two drives. In the recent past, Beelink has hinted at a larger 4-bay variant. According to the company's Weibo blog, the ME Pro's unique selling point (USP) is: "a groundbreaking DIY drawer-style design, modularizing the motherboard and allowing you to upgrade the core components as needed, truly realizing 'My device, my control,' solving big problems with a small investment."

Beelink's promo video shows AMD, Arm, and Intel board options being swapped in and out of the compact hybrid NAS/PC system. On the Intel side, one press outlet—NAS Compares—reportedly attended a recent special preview event at Beelink's headquarters, where representatives allegedly outlined board designs featuring Intel N95 "Alder Lake-N" and N150 "Twin Lake" processors. The manufacturer has not released an ME Pro specification sheet, so it is difficult to judge the product's proportions. In a second Weibo post, Beelink's description reads as follows: "(our design) uses a unibody metal casing, replacing the traditional multi-component structure, which significantly reduces size while improving structural strength. Compared to mainstream dual-bay NAS devices, the overall size is reduced by nearly 50%, making it more suitable for desktop and home environments." A vague "coming soon" tagline could hint at an eventual proper unveiling at next month's CES trade show.

Minisforum Intros BD895i SE MoDT ITX Motherboard with Ryzen 9 8945HX

Minisforum has introduced the BD895i SE, a new MoDT (Mobile on Desktop) Mini-ITX motherboard built around AMD's Ryzen 9 8945HX mobile processor. Similar in concept to the earlier BE7XXi HX series, the BD895i SE targets compact, high-performance desktop builds by combining a flagship mobile CPU with a full-featured ITX (170 × 170 mm) layout and a large, pre-installed CPU heatsink. The board features a 16-core, 32-thread Ryzen 9 8945HX processor, running at a 2.5 GHz base clock with boost speeds of up to 5.4 GHz. The CPU is rated at a 100 W TDP, with boost power reaching up to 120 W. Minisforum included a low-profile heatsink with four heatpipes and a total height of 37 mm, leaving users the option to add their own 120 mm fan if needed.

The BD895i SE motherboard features a full PCIe 5.0 x16 slot for a dedicated graphics card, as the integrated AMD Radeon 610M is not really suitable for gaming. Then there are two DDR5-5200 SO-DIMM slots supporting up to 96 GB and dual PCIe 4.0 x4 M.2 2280 slots for NVMe storage. Another M.2 2230 E-Key slot is reserved for Wi-Fi or Bluetooth cards. In terms of connectivity, it offers HDMI 2.1, DisplayPort 1.4, and USB-C with DisplayPort Alt Mode (all up to 8K at 60 Hz or 4K at 120 Hz). The rear I/O rounds out the specs with a 2.5 GbE LAN port, a mix of USB 2.0, USB 3.2 Gen 1, and USB-C Gen 2 ports, and full analog audio jacks. Power is supplied through standard ATX connectors, making the board compatible with conventional desktop PSUs. Pricing is set at around $423.90 in the US (regular price $529), while European pricing is listed at €439 (regular price €549). The Minisforum BD895i SE is positioned as a compact but premium option for users looking to build a powerful small-form-factor system around AMD's latest high-end mobile CPU.

(PR) Aspyr Postpones Launch of Deus Ex Remastered, Refunds Existing Pre-orders

Thank you to the community for your feedback following the reveal of Deus Ex Remastered. We've listened to what you had to say, and in order to better meet fan expectations and deliver the best possible experience for players, Deus Ex Remastered will no longer launch on Feb. 5, 2026. We look forward to sharing more updates with you when the time comes. All existing pre-orders will be fully refunded. Thank you again for your patience and support.

About Deus Ex Remastered
The year is 2052. Societies are teetering on collapse, plagues spread unchecked, and shadow governments shape the future. You are JC Denton—a nano-augmented UNATCO operative tasked with protecting order. But the deeper you dig, the more lies you uncover. What begins as a mission becomes a fracture point in human history.

Apple's "Baltra" AI Chip Reportedly Launching in 2027, Could be Inference-only Design

About a year ago, industry insiders let slip early details regarding Apple's mysterious "Baltra" AI chip. At the time, Broadcom was reportedly assisting with the design's network technology. In addition, the "Baltra" project was linked to TSMC 3 nm N3E or N3P node processes. Recent leaks suggest an adjusted gestation schedule; instead of a predicted start of "mass production by 2026," timelines seem to be extended by another year. The reasons behind this rumored delay are not clear, but the North American giant has—officially, as of October 2025—bolstered its Apple Intelligence and Private Cloud Compute-oriented data centers with US-made advanced server equipment, likely running on familiar M2 Ultra processors. Industry moles reckon that M4-based variants are being readied for distribution.

Apple's futuristic in-house "Baltra" AI setup is expected to be inference-oriented, as predicted by Max Weinbach. The tech industry analyst speculated: "basically, I doubt they'll do a massive cluster, but maybe something closer to a GB300 style, with ~64 chips all to all with larger high bandwidth LPDDR memory. Should be significantly cheaper than most current chips, and match the needs." Last month, Mark Gurman claimed that Apple had signed up for a custom Google Gemini-powered LLM. The notorious tipster believes that the claimed $1 billion per year deal will drive next-gen personalized Siri and AI web search functions. As mentioned before, Apple's in-progress "Baltra" data center chip could be designed for efficiency and cost-effectiveness, rather than the raw processing power required for training.

(PR) Arctic Announces MX-7 Thermal Interface Material

ARCTIC introduces MX-7, a new thermal paste that delivers reliable performance in both classic desktop CPU setups and demanding direct die applications on GPUs, laptops, and console processors. Thanks to a new formula, MX-7 achieves even lower thermal resistance and offers a measurable increase in performance. The thermal paste is neither electrically conductive nor capacitive, eliminating the risk of short circuits or discharges.

MX-7 is a relatively viscous thermal paste that cannot be applied with a spatula. An even layer thickness is achieved through contact pressure when mounting the cooler. The MX-7 impresses not only with its performance, but also with its new syringe design. The improved composition with high cohesiveness reduces the pump-out effect during repeated heat cycles—ideal for long-lasting performance. The Authenticity Check also lets buyers verify that each individual product is genuine.

PayPal wants to become a bank in the US


The San Jose-based firm submitted applications to both the Federal Deposit Insurance Corporation and the Utah Department of Financial Institutions to form an industrial loan company chartered in Utah. The proposed entity, to be known as PayPal Bank, would allow the company to expand its small-business lending operations and offer...

Billionaire CRE Developer Warns of Data Center Finance Risks

The burgeoning demand for artificial intelligence has ignited a gold rush in data center development, but for seasoned commercial real estate (CRE) billionaire Fernando De Leon, this frenetic activity bears unsettling resemblances to past market excesses. De Leon, CEO of Leon Capital Group, recently spoke with CNBC Senior Real Estate Correspondent Diana Olick on the […]

Lazard CEO: U.S. economy increasingly a levered bet on AI

Lazard CEO Peter Orszag joined CNBC’s “Squawk Box” to discuss the current state of the economy, the impact of the AI boom, and the broader implications for businesses and employment. He articulated a bifurcated economic landscape where AI-driven sectors are experiencing significant growth, contrasted with other areas that are not seeing the same level of […]

A 3-tier framework for Shopify integrations that drive conversions

Shopify powers more than 6 million live ecommerce websites, supported by a robust app ecosystem that can extend nearly every part of the customer journey. 

Anyone can develop an app to perform virtually any function. 

But with so many integrations to choose from, ecommerce teams often waste time testing add-ons that promise revenue gains but fail to deliver.

Having worked across a wide range of Shopify implementations, I’ve seen which tools consistently improve checkout completion, recover abandoned carts, and increase revenue. 

Based on that experience, I’ve organized the most effective integrations into three tiers by priority – so you can implement the essentials first, then move on to more advanced optimization.

Tier 1: Mobile-first, frictionless buying

With 54.5% of holiday purchases happening on mobile, the ecommerce experience must be seamless and flexible. 

As a result, every Shopify site should have two components integrated into its storefront: 

  • Digital wallet compatibility.
  • A buy now, pay later (BNPL) option. 

Without these in place, Shopify users introduce unnecessary friction into the purchase journey and risk sending customers to competitors. 

The good news is that both components integrate natively with Shopify, requiring no custom development.

Why you need digital wallets

Digital wallets, such as Apple Pay, Google Pay, and PayPal, autofill delivery and payment information with a single click, eliminating the friction of typing on a small screen. 

This ease of use can shorten the purchase journey to just a few clicks between a social ad and checkout.

Adoption is accelerating. Up to 64% of Americans use digital wallets at least as often as traditional payment methods, and 54% use them more often.
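
Shopify surfaces these wallets natively at checkout, so no code is required there. For context, a custom storefront would gate its wallet buttons on a capability check; the sketch below uses Safari's real ApplePaySession.canMakePayments() API, while the button ID is a hypothetical example.

```typescript
// Sketch: reveal the Apple Pay button only where the browser supports it.
// The equivalent Google Pay check (PaymentsClient.isReadyToPay) requires
// loading Google's pay.js script first, so it is only noted here.

declare const ApplePaySession: { canMakePayments(): boolean } | undefined;

function applePayAvailable(): boolean {
  return typeof ApplePaySession !== "undefined" && ApplePaySession.canMakePayments();
}

const applePayButton = document.querySelector<HTMLElement>("#apple-pay-button");
if (applePayButton) {
  applePayButton.hidden = !applePayAvailable(); // hide the button on unsupported browsers
}
```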

Eliminate price objections with BNPL

Beyond payment convenience, customers also expect flexibility. 

BNPL providers, including Klarna and Afterpay, allow buyers to spread payments over time, reducing price objections at checkout. 

These options contributed $18.2 billion to online spending during last year’s holiday season – an all-time high, according to Adobe.

Together, digital wallets and BNPL form the foundation of a modern, mobile-first checkout experience. 

With these essentials in place, Shopify users can focus on tools that re-engage customers and bring them back to complete their purchases.

Dig deeper: The ultimate Shopify SEO and AI readiness playbook

Tier 2: The re-engagement power players

The second tier focuses on re-engagement – tools designed to bring back customers who have already shown intent. 

These integrations improve abandoned-cart recovery, increase repeat purchases, and build trust through social proof.

Re-engage customers with email and SMS

Email remains one of the most effective channels for re-engaging customers at every stage of the journey. 

Klaviyo and Attentive are strong options for Shopify users because both offer deep platform integration with minimal setup.

Both platforms also support SMS, allowing Shopify sellers to send automated text messages directly to customers’ mobile devices. 

SMS consistently delivers higher open, click-through, and conversion rates than email, making it especially effective for re-engagement use cases such as abandoned-cart recovery.

Together, these tools enable targeted campaigns and sophisticated automated flows that drive incremental revenue. 

However, CAN-SPAM and TCPA regulations require explicit opt-in for email and SMS marketing, respectively. 

As a result, sellers can only use these channels to contact customers who have agreed to receive marketing messages.
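
In practice, that means filtering every send list on consent flags before a message goes out. Below is a minimal sketch with a hypothetical contact record; Klaviyo and Attentive track equivalent consent states for you, so this is purely conceptual.

```typescript
// Sketch: only contacts with an explicit opt-in for a given channel receive
// the campaign. The Contact shape is an assumption for illustration.

interface Contact {
  email: string;
  phone?: string;
  emailOptIn: boolean; // CAN-SPAM: explicit consent for marketing email
  smsOptIn: boolean;   // TCPA: explicit consent for marketing SMS
}

function eligibleRecipients(contacts: Contact[], channel: "email" | "sms"): Contact[] {
  return contacts.filter((c) =>
    channel === "email" ? c.emailOptIn : c.smsOptIn && Boolean(c.phone)
  );
}
```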

Use human-centered SMS outreach

While Attentive and Klaviyo effectively reach customers who have opted in to marketing, CartConvert helps sellers engage the 50% to 60% of shoppers who have not. 

The platform uses real people to contact cart abandoners via SMS. Because the outreach is not automated, TCPA restrictions do not apply.

CartConvert agents have live conversations with potential customers about their shopping experience. 

They are familiar with the products and can guide buyers back toward a purchase by suggesting alternatives or offering discounts. 

Running CartConvert alongside Klaviyo or Attentive ensures both subscribers and non-subscribers are included in re-engagement efforts.

Demonstrate social proof through reviews

Human-centered marketing also plays a role in building buyer confidence. 

Today’s online shoppers rely heavily on reviews when making purchasing decisions. 

When reviews are integrated directly into the shopping experience, they help establish trust and legitimacy, which in turn drive higher conversion rates. 

A product with five reviews is 270% more likely to be purchased than one with no reviews, research from the Spiegel Research Center at Northwestern University found.

Shopify users can choose from several review aggregators that pull Google reviews into product pages. 

Sellers should prioritize aggregators that also sync with Google Merchant Center, which powers Google Ads. 

Tools such as Okendo, Yotpo, and Shopper Approved integrate smoothly with both Shopify and Google’s ecosystem.

When reviews sync with Merchant Center, they can appear in Google Shopping ads, improving ad performance. 

While these tools add cost, they are also proven to generate incremental revenue that offsets the investment.
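
Most review apps inject the underlying markup automatically. Purely for illustration, a hand-rolled sketch of the schema.org rating markup that helps star ratings surface as rich results in organic search might look like the following; the product name and numbers are made up, and the Merchant Center product ratings feed remains a separate, app-managed step.

```typescript
// Sketch: inject schema.org Product + AggregateRating structured data.
// Values are illustrative; a review app would generate equivalent markup.

const productStructuredData = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Waterproof Jacket", // hypothetical product
  aggregateRating: {
    "@type": "AggregateRating",
    ratingValue: 4.7,
    reviewCount: 132,
  },
};

const ldScript = document.createElement("script");
ldScript.type = "application/ld+json";
ldScript.textContent = JSON.stringify(productStructuredData);
document.head.appendChild(ldScript);
```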

Dig deeper: How to make ecommerce product pages work in an AI-first world

Tier 3: Advanced optimization

The final tier includes more advanced integrations designed to help sellers optimize their sales funnel and performance at scale.

Attribution and analytics: Triple Whale

GA4’s changes to reporting, session logic, and interface have made attribution more difficult for many ecommerce teams. 

As a result, sellers are increasingly seeking clearer, independent performance insights.

Since 2023, Triple Whale has emerged as a leading alternative to Google Analytics, offering third-party attribution tools that integrate seamlessly with Shopify. 

The platform supports multiple attribution models – including first-click, last-click, and linear – along with cross-platform cost integration.
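
To make the model differences concrete, here is a small sketch of how linear attribution splits an order's revenue across recorded touchpoints. The touchpoint shape is an assumption for illustration, not Triple Whale's actual data model; first-click would hand all credit to the first entry, last-click to the final one.

```typescript
// Sketch: linear attribution gives every touchpoint an equal share of revenue.

interface Touchpoint {
  channel: string;   // e.g. "google_ads", "meta", "email"
  timestamp: string; // ISO date of the interaction
}

function linearAttribution(touchpoints: Touchpoint[], revenue: number): Map<string, number> {
  const credit = new Map<string, number>();
  if (touchpoints.length === 0) return credit;

  const share = revenue / touchpoints.length; // equal weight per interaction
  for (const tp of touchpoints) {
    credit.set(tp.channel, (credit.get(tp.channel) ?? 0) + share);
  }
  return credit;
}

// Example: a $90 order with three touchpoints assigns $30 per interaction,
// so a channel touched twice accumulates $60.
linearAttribution(
  [
    { channel: "google_ads", timestamp: "2025-11-01" },
    { channel: "email", timestamp: "2025-11-03" },
    { channel: "google_ads", timestamp: "2025-11-05" },
  ],
  90
); // => Map { "google_ads" => 60, "email" => 30 }
```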

It also provides real-time data, which Google Analytics does not. 

This capability becomes especially valuable during high-pressure sales periods, such as Black Friday, when delayed reporting can lead to missed opportunities.

Although Triple Whale can cost up to $10,000 annually for mid-size brands, the improved data quality often justifies the investment for teams scaling paid acquisition.

Landing page customization: Replo

For sellers focused on improving conversion rates, landing page testing is essential. 

While Shopify is relatively easy to use, making changes to a live storefront for A/B testing carries the risk of breaking the site.

Replo allows Shopify users to build custom landing pages that can be tested at scale without coding. 

These pages typically provide a better user experience than default Shopify themes. 

Replo can also use site data to personalize landing pages based on a shopper’s browsing history. 

As a result, Replo-built pages often convert at higher rates than static site pages.
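
Replo handles the traffic split itself, but the underlying assignment logic is worth seeing once. Below is a minimal sketch of deterministic 50/50 bucketing by visitor ID; the hash function, experiment name, and visitor ID are all illustrative.

```typescript
// Sketch: assign each visitor to a landing-page variant deterministically,
// so the same visitor sees the same page on every session.

function hashString(input: string): number {
  // Small non-cryptographic FNV-1a-style hash; fine for bucketing.
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return hash >>> 0;
}

function assignVariant(visitorId: string, experiment: string): "control" | "variant" {
  // Salting with the experiment name keeps buckets independent across tests.
  return hashString(`${experiment}:${visitorId}`) % 2 === 0 ? "control" : "variant";
}

// Usage: route the visitor before the page renders.
const variant = assignVariant("visitor-1234", "pdp-redesign-q1");
```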

TikTok ads integration

TikTok continues to grow as a paid media channel, but it has traditionally presented a higher barrier to entry for advertisers. 

Previously, sellers needed an active TikTok account and could only purchase ads within the app, adding complexity and cost.

TikTok’s Shopify integration allows sellers to create ads that link directly to their websites, rather than keeping users inside the app. 

This change has lowered the barrier to entry and expanded access to the platform. 

Early testing shows promise for use cases such as cart abandonment, making the integration worth exploring despite its relative immaturity.

Dig deeper: Ecommerce SEO: Start where shoppers search

Prioritizing Shopify integrations for maximum impact

Shopify is a powerful platform for ecommerce, but maximizing results requires going beyond its default features. 

  • Start with essentials such as digital wallets and BNPL to reduce checkout friction. 
  • Then layer in email, SMS, and review integrations to re-engage interested shoppers. 
  • Finally, add analytics, attribution, and landing-page testing to optimize performance at scale.

Sellers do not need to implement every solution at once. 

Instead, conduct a quick audit of the existing stack against this framework, identify gaps, and prioritize the tools that improve conversion and re-engagement. 

Shopify’s flexibility is its greatest strength, and its app ecosystem enables sellers to turn more visitors into buyers.

Google says doing optimization for AI search is ‘the same’ as doing SEO for traditional search

Optimizing for AI search is “the same” as optimizing for traditional search, Google SVP of Knowledge and Information Nick Fox said in a recent podcast. His advice was simple: build great sites with great content for your users.

More details. Fox made the point on the AI Inside podcast, during an interview with Jason Howell and Jeff Jarvis. Here is the transcript from the 22-minute mark:

Jarvis: “Is there guidance for enlightened publishers who want to be part of AI about how they should view, should they view their content in any way differently now?”

Fox: “The short answer is no. The short answer is what you would have built and the way to optimize to do well in Google’s AI experiences is very similar, I would say the same, as how to perform well in traditional search. And it really does come down to build a great site, build great content. The way we put it is: build for users. Build what you would want to read, what you would want to access.”

Why we care. Many of you have been practicing SEO for many years, and now with this AI revolution in Search, you should know you are very well equipped to perform well in AI Search with many, if not all, of the skills you learned doing SEO. So have at it.

The video. Is AI Search Hurting The Open Web? With Google’s Nick Fox // AI Inside #104

Build great sites, great content, for your users, according to Nick Fox, SVP of Knowledge and Information at Google.

Help us shape SMX Advanced 2026. You could win an All Access pass!

We celebrated a major milestone in June: the return of SMX Advanced as an in-person event. It was our first since 2019.

More than a conference, SMX Advanced 2025 was a reunion. Search marketers from around the world came together to connect, exchange ideas, and learn the most current and advanced insights in search.

But search never stands still. With rapid shifts in AI SEO, constant algorithm changes, and the challenge of balancing generative AI with a human touch, the need for truly advanced, actionable education has never been greater.

Help shape SMX Advanced 2026

We’re committed to making the SMX Advanced 2026 program our most relevant, advanced, and exciting deep-dive experience yet. And we can’t do it without you – the expert community that makes this event legendary.

We’re inviting you to directly shape the curriculum for 2026.

Help us build a program that tackles the biggest challenges and opportunities on your radar by completing our short survey. Tell us:

  • What advanced topics are most critical to your professional growth right now.
  • Which recent search changes or complexities are keeping you up at night.
  • Which search industry experts and innovators you need to hear from.
  • Which session formats – from deep-dive clinics to lightning talks and interactive panels – will help you learn more and retain what you learn.

Fill out the survey here.

Be entered to win an All Access pass

To thank you for your time and insights, everyone who completes the survey will have the opportunity to enter an exclusive drawing.

One lucky participant will win a coveted All Access pass to SMX Advanced 2026, taking place June 3-5 at the Westin Boston Seaport.

Submit a session pitch

Beyond shaping the agenda, we also invite you to submit a session pitch. If you have a breakthrough strategy, an innovative case study, or next-level insights, this is your chance to help lead the industry conversation.

Read our guide to speaking at SMX for more details on how to submit a session idea. When you’re ready, create your profile and send us your session pitch.

We look forward to your submissions and insights! If you have any questions, feel free to reach out to me at kathy.bushman@semrush.com.

Amazon Exposes Years-Long GRU Cyber Campaign Targeting Energy and Cloud Infrastructure

Amazon's threat intelligence team has disclosed details of a "years-long" Russian state-sponsored campaign that targeted Western critical infrastructure between 2021 and 2025. Targets of the campaign included energy sector organizations across Western nations, critical infrastructure providers in North America and Europe, and entities with cloud-hosted network infrastructure. The activity has

AMD AM4 CPU pricing spike as PC market forces alternative upgrades

High DDR5 pricing is causing AMD AM4 CPU price rises. Consumer-grade DDR5 memory modules have seen price increases of 178-258%, forcing PC builders to consider alternative upgrade paths. Instead of moving to newer DDR5 platforms, some PC builders are upgrading to AMD’s DDR4-based AM4 platform. Why? The answer is simple: they can keep using the DDR4 memory they […]

The post AMD AM4 CPU pricing spike as PC market forces alternative upgrades appeared first on OC3D.

(PR) Call of Duty: Black Ops 7 Season 01 Enhanced with AMD FSR Redstone Features

Call of Duty: Black Ops 7 Season 01 kicks off with a bang: remastered classics like Nuketown 2025 and Standoff, new maps like Fate, Utopia, and Odysseus, and a wild holiday variant, Sleighjacked. Multiplayer fans get a buffet of modes—Prop Hunt, Sticks and Stones, One in the Chamber, Sharpshooter, and Gun Game—each tuned for maximum chaos and competition. Endgame received colossal new World Events. And for Zombies fans? The new Astra Malorum Map and Survival Map: Exit 115 are waiting, with new enemies (hello, O.S.C.A.R.), The Mule Kick Perk-a-Cola, and a new Wonder Weapon to discover.

AMD FSR: Ready to Run
Season 01 is the first to launch with AMD's FSR Redstone features. What does that mean for players? In short: smoother performance, sharper visuals, and a more immersive experience—no matter how intense the firefight gets. FSR is built on machine learning, delivering next-gen upscaling, frame generation, and ray regeneration. Whether you're pushing for 240 Hz on a Radeon RX 9000 series card or just want your game to look and feel incredible at any frame rate, Redstone has you covered.

(PR) SEMI Reports Global Semiconductor Equipment Sales to Reach $156 Billion in 2027

Global sales of total semiconductor manufacturing equipment by original equipment manufacturers (OEMs) are forecast to reach a record high of $133 billion in 2025, growing 13.7% year-on-year, SEMI announced today in its Year-End Total Semiconductor Equipment Forecast - OEM Perspective at SEMICON Japan 2025. Growth in semiconductor manufacturing equipment sales is expected to continue in the two following years of the forecast period, with projections of $145 billion in 2026 and $156 billion in 2027. This growth will be driven primarily by investments related to AI, particularly in leading-edge logic, memory, and the adoption of advanced packaging technologies.

"Global semiconductor equipment sales show robust momentum, with both the front-end and back-end segments projected to see three consecutive years of growth, culminating in total sales surpassing $150 billion for the first time in 2027," said Ajit Manocha, SEMI president and CEO. "Investments to support AI demand have been stronger than anticipated since our midyear forecast, leading us to boost the outlook for all segments."

(PR) D-Link Launches Nuclias Unity Network Management Platform

D-Link Corporation, a global leader in networking and connectivity solutions, announced the launch of Nuclias Unity, a next-generation license-free cloud network management platform. Purpose-built for organizations ranging from SMBs to large, multi-site enterprises, Nuclias Unity delivers unified control, simplified operations, and enterprise-grade reliability—without the traditional cost and complexity of licensed cloud platforms.

Backed by D-Link's decades of networking expertise, Nuclias Unity empowers IT teams to manage wired and wireless networks through a unified cloud management platform, accelerating cloud adoption while ensuring visibility, consistency, and operational efficiency across all business environments.

Colorful Intros Surprisingly Small iGame RTX 5070 Mini OC and RTX 5060 Ti Mini OC Series

Colorful today introduced its iGame Mini RTX 50-series graphics cards. Not only do these cards meet NVIDIA's SFF-Ready specification for compact graphics cards, but they exceed it by miles (rather, inches). The series is led by the iGame GeForce RTX 5070 Mini OC, followed by the iGame GeForce RTX 5060 Ti 16 GB Mini OC, and its 8 GB variant. Both the RTX 5070 and RTX 5060 Ti based cards appear to have a similar cooling solution, which contributes to their tight dimensions of 18 cm length, 12.3 cm height, and strictly 2-slot thickness.

The cooler of these cards features a dense aluminium fin-stack heatsink with the fins protruding out from the cooler shroud, much like NVIDIA's Founders Edition coolers. There is a single 95 mm axial airflow fan ventilating it. The backplate is solid metal, with a matte surface. On the RTX 5060 Ti 16 GB Mini OC, it cools half the memory chips. The iGame RTX 5070 Mini OC comes with factory OC of 2557 MHz compared to 2512 MHz reference; while both the RTX 5060 Ti cards come with 2632 MHz boost frequencies compared to 2572 MHz reference. The iGame RTX 5070 Mini OC draws power from a 16-pin 12V-2x6 connector rated for 300 W input, while the RTX 5060 Ti Mini OC cards come with single 8-pin PCIe power connectors. The company didn't reveal pricing.

Sapphire rep predicts DRAM prices will begin to stabilize in the next 6-8 months, but warns 'it may not be the prices we want' — GPU vendor says memory crisis is similar to tariff uncertainty

Amidst the economic uncertainty ushered in by this AI boom, some folks still have conviction and are offering hope to the community. Edward Crisler, the PR manager for GPU maker Sapphire, has just said that he believes DRAM prices will start to plateau in the next few months, so don't panic buy right now.

Progress Stalls: Sheryl Sandberg Warns AI Could Exacerbate Gender Inequality

The latest Lean In-McKinsey study reveals a stark truth: progress for women in the workplace is not just slowing, it’s stalling. Sheryl Sandberg, a pivotal figure in advocating for women’s leadership, returned to the public spotlight to deliver this sobering message, underscoring how emerging technologies like artificial intelligence threaten to further widen the gender gap. […]

AI Fuels Megadeal Surge, Redefining M&A Landscape

Nearly a quarter of megadeals this year were AI-driven, a stark indicator of artificial intelligence’s transformative power in the M&A landscape. This trend, highlighted by Paul Griggs, U.S. Senior Partner at PwC, during his recent interview with Frank Holland on CNBC’s Worldwide Exchange, underscores a pivotal shift where strategic positioning and technological advancement are paramount. […]

Unifying AI Operations: Flexible Orchestration Beyond Kubernetes

The sheer velocity of AI innovation demands an infrastructure that can adapt, not just scale. At IBM’s TechXchange in Orlando, Solution Architect David Levy and Integration Engineer Raafat “Ray” Abaid illuminated the critical need for a paradigm shift in how AI and machine learning workloads are managed, moving beyond the traditional automation paradigms. Their discussion […]

House proposes bill to advance data center buildout speed

The proposed legislation, dubbed “The SPEED Act,” seeks to significantly reduce the time required for permitting and construction of data centers and associated power infrastructure. This is a crucial development, as the voracious appetite of AI for computational power necessitates a corresponding acceleration in the physical infrastructure that supports it. The bill proposes to limit […]

Google fixes weeks-long Search Console Performance report delay

Screenshot of Google Search Console

Google Search Console appears to have fixed the weeks-long delay in Performance reports. After several weeks of 50+ hour lag times, the reports now seem up to date as of the past few hours.

Now up-to-date. If you check the Search Performance report now, you should see a normal delay of about two to six hours. Over the past few weeks, that delay had stretched to more than 70 hours.

The delays began a few weeks ago and took roughly three weeks to fully clear, including the backlog of data.

Page indexing report. Meanwhile, the Page Indexing report delay we reported weeks ago is still unresolved. The report is now almost a month behind, and Google has not fixed it yet. Google posted a notice at the top of the report that says:

  • “Due to internal issues, this report has not been updated to reflect recent data”

Why we care. If you rely on Search Console data for analytics and stakeholder or client reporting, this has been extremely frustrating. The Performance reports now appear to be updating normally, but the Page Indexing report remains heavily delayed and will continue to create reporting headaches.

Meanwhile, Google released a number of new features in the past few weeks.

How to boost ROAS like La Maison Simons by Channable

Managing large catalogs in Google Performance Max can feel like handing the algorithm your wallet and hoping for the best. 

La Maison Simons faced that exact challenge: too many products and not enough control. Then they rebuilt their segmentation with Channable Insights and turned a “black box” campaign into a revenue-generating machine.

Step 1: Stop segmenting by category

Simons originally split campaigns by product category. It sounded logical – until their best-selling sweater ate the budget and newer or overlooked products never had a chance to surface.

Static segmentation meant limited visibility and slow decisions.

Marketers stayed stuck making manual tweaks while Google kept auto-prioritizing only what was already working.

Step 2: Segment by performance

Enter Channable Insights. Product-level performance data (ROAS, clicks, visibility) now powers dynamic grouping:

Chart: product segments – Star Products, Zombie Products, and New Arrivals – each with its own goals and strategies.

Products automatically move between these segments as performance shifts – no manual work needed. As Etienne Jacques, Digital Campaign Manager, Simons, put it:

“One super popular item no longer takes all the money.”

Step 3: Shorten your analysis window

Instead of waiting 30 days for signals, Simons switched to a rolling 14-day window.

The result: faster reactions, sharper accuracy, and less wasted spend in a fast-moving catalog.
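
Conceptually, the segmentation plus the rolling window looks something like the sketch below. The thresholds are illustrative assumptions for this article, not Channable's or Simons' actual rules.

```typescript
// Sketch: bucket products into Stars, Zombies, and New Arrivals from a
// rolling 14-day performance window. Threshold values are illustrative.

interface ProductStats {
  id: string;
  daysLive: number;  // days since the product was published
  clicks14d: number; // clicks over the rolling 14-day window
  roas14d: number;   // revenue / ad spend over the same window (8 = 800%)
}

type Segment = "star" | "zombie" | "new_arrival";

function classify(p: ProductStats): Segment {
  if (p.daysLive <= 30) return "new_arrival";             // give new items time to gather data
  if (p.roas14d >= 8 && p.clicks14d >= 50) return "star"; // proven performers
  return "zombie";                                        // low visibility or weak returns
}

// Re-run on every feed sync so products move between segments automatically.
function segmentCatalog(products: ProductStats[]): Map<Segment, ProductStats[]> {
  const buckets = new Map<Segment, ProductStats[]>([
    ["star", []],
    ["zombie", []],
    ["new_arrival", []],
  ]);
  for (const p of products) buckets.get(classify(p))!.push(p);
  return buckets;
}
```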

Step 4: Push the strategy across channels

Why stop at Google? The same segmentation logic was automatically applied on:

  • Meta
  • Pinterest
  • TikTok
  • Criteo

Cross-channel consistency creates compounding optimization.

Step 5: Watch the metrics climb

Without raising ad spend, Simons unlocked:

  • ROAS growth: from ~800% to ~1500%
  • CPC decrease: $0.37 to $0.30
  • CTR lift: 1.45% to 1.86%
  • 14% increase in average order value
  • 1300% ROAS for New Arrivals campaigns
  • Faster workflows and fewer manual tweaks

Even the “invisibles” turned into surprise profit drivers once they finally got the spotlight.

Step 6: Treat automation as control, not chaos

Automation restored marketing control – it didn’t remove it.

Teams can finally learn from the data and influence which products grow, instead of letting PMax run everything on autopilot.

Table: Quick rules to implement – Principle / Why it matters.

Your action plan

  • Classify products as Stars, Zombies, and New Arrivals.
  • Automate campaign reassignment based on real-time data.
  • Refresh product insights every 14 days.
  • Roll out segmentation logic to every paid channel.
  • Scale what wins – test what hasn’t yet.

Want Simons-style ROAS gains without extra ad spend? Start by testing the quality of your product data with a free feed and segmentation audit.

Why Data Security and Privacy Need to Start in Code

AI-assisted coding and AI app generation platforms have created an unprecedented surge in software development. Companies are now facing rapid growth in both the number of applications and the pace of change within those applications. Security and privacy teams are under significant pressure as the surface area they must cover is expanding quickly while their staffing levels remain largely

Fortinet FortiGate Under Active Attack Through SAML SSO Authentication Bypass

Threat actors have begun to exploit two newly disclosed security flaws in Fortinet FortiGate devices, less than a week after public disclosure. Cybersecurity company Arctic Wolf said it observed active intrusions involving malicious single sign-on (SSO) logins on FortiGate appliances on December 12, 2025. The attacks exploit two critical authentication bypasses (CVE-2025-59718 and CVE-2025-59719

Rockstar says that the IWGB union 'have no idea who was in this Discord' as the GTA 6 developer continues to claim that the firings of former devs were over leaks of 'specific game features'

Rockstar Games has once again addressed accusations of union-busting from the Independent Workers Union of Great Britain (IWGB), claiming that it fired over 30 Grand Theft Auto 6 developers for sharing company secrets in a Discord channel.

It’s bad – Here’s how much DDR5 pricing has increased

How much have DDR5 memory prices increased? We all know that DDR5 memory pricing has shot up, but how bad is the situation? Has AI-driven datacenter demand ruined the DRAM market? Yes, but how much is it hitting our wallets? We have looked at today’s DRAM pricing and compared it to 30 days […]

The post It’s bad – Here’s how much DDR5 pricing has increased appeared first on OC3D.

Thermaltake Intros TH360 V3 Ultra ARGB Sync Snow Edition CPU Cooler

Thermaltake today introduced the TH360 V3 Ultra ARGB Sync Snow Edition, a premium all-in-one liquid CPU cooler that's positioned a notch below the company's Minecube Ultra series. The cooler features a cube-shaped pump-block, with a larger 4-inch, square true-color display floating on top. The cubical region stays recessed to ensure clearance around the CPU socket area. The 4-inch square display comes with a decent resolution of 720 x 720 pixels, and since it's square, the display-head can be rotated via software without any moving parts. The display connects to TT RGB Plus software, which puts out system monitoring info pulled from ACPI, and also lets you change backgrounds, and the cooler's lighting. Speaking of which, you get an ARGB diffuser framing the screen from the sides, and each of the three included 120 mm fans comes with RGB lighting of its own.

The pump turns at speeds ranging between 800 and 2,500 RPM. The cooler comes with 46 cm long tubing. Each of the three included 120 mm fans ventilating the 360 mm radiator turns at 500 to 2,500 RPM, pushing up to 85.29 CFM of airflow at 3.86 mm H₂O static pressure, and up to 37.8 dBA of noise. All current desktop CPU socket types are supported, including AM5, AM4, LGA1851, LGA1700, and LGA115x/LGA1200. Thermaltake claims that the cooler can handle thermal loads of up to 365 W TDP, making it suitable for enthusiast-segment processors. The company didn't reveal pricing.

(PR) Enermax Launches REVOLUTION III S 1000 W Platinum ATX 3.1 PSU

ENERMAX, an industry-leading force dedicated to designing high-performance computer hardware and cooling solutions, proudly announces the launch of the REVOLUTION III S 1000 W, a premium fully modular power supply designed to meet the latest standards in performance, reliability, and aesthetics. Available in both black and white, the REVOLUTION III S 1000 W offers enthusiasts and professionals the flexibility to build powerful systems without compromise. The REVOLUTION III S comes with a 13-year warranty that highlights ENERMAX's confidence in the product's reliability and long-term durability.

The newest addition to the REVOLUTION line is 80 Plus, Cybenetics, and PPLP Platinum certified, ensuring outstanding efficiency and reduced power consumption. Fully compliant with the Intel ATX 3.1 standard, the REVOLUTION III S guarantees stable and reliable power delivery not only for the latest generation of high-performance graphics cards and CPUs, but also for upcoming generations. Equipped with one 12V-2x6 interface, the REVOLUTION III S 1000 W can deliver up to 600 W of dedicated power to the latest GPU, making it future-ready for enthusiast builds.

DRAM Price Hikes Have Minimal Impact on PC OEMs, Notes Report

The global DRAM shortage has driven up the prices of individual RAM kits, but PC OEMs have largely remained unaffected. Acer and ASUS indicated this week that rising memory costs are starting to influence notebook pricing, though retail price changes are still limited for now. Some brands, such as Dell, may begin raising prices on select high-end and business models, but neither Acer nor ASUS has officially changed its MSRPs in any way. Company executives warned that as new orders enter the market in the coming quarters, memory inflation will increasingly be reflected in end-product pricing. For now, however, pricing remains stable.

Much of the near-term price stability comes from long-term supply agreements that protect OEMs and ODMs from paying spot market rates. Acer's CEO noted that memory historically accounted for about 8% to 10% of a PC's bill of materials, and that a 30% to 50% rise in memory costs so far has resulted in only an approximate 2% to 3% impact on the total BOM cost. Since many manufacturers secure memory through contracts that renew on quarterly or multi-year cycles, the wider effects of price fluctuations are likely to appear gradually, with more noticeable changes expected from the second quarter and into the third quarter of 2026 as contracts reset.

Samsung denies ending SATA SSD production amid NAND squeeze


The controversy comes amid escalating demand for semiconductor memory driven by the growth of artificial intelligence infrastructure. Much of the industry's available NAND flash, once destined for consumer hardware such as SSDs, is now being redirected toward hyperscalers and AI labs. That shift has created one of the most constrained...

The data center cooling state of play (2025) — Liquid cooling is on the rise, thermal density demands skyrocket in AI data centers, and TSMC leads with direct-to-silicon solutions

The rise of AI and hyperscale computing is driving a global shift from air-based to liquid and embedded cooling as various companies are developing silicon-integrated systems capable of handling multi-kilowatt system-in-packages that can be commercialized by 2027.

Physical AI’s Off-Screen Revolution: Sanjit Biswas on Scaling Real-World Impact

The next transformative wave of artificial intelligence is unfolding not in the digital ether, but in the tangible, messy reality of the physical world. This was the central thesis articulated by Sanjit Biswas, CEO of Samsara, in a recent discussion with Sequoia Capital’s Sonya Huang and Pat Grady. Biswas, a serial founder known for scaling […]

Samsung refutes consumer SSD phase-out rumours

Samsung denies SATA SSD phase-out rumours, calling them false Samsung has officially denied reports that it plans to phase out its SATA SSDs and other consumer products. This follows recent rumours that Samsung planned to wind down its SATA SSD production to free up manufacturing capacity for data centre and AI customers. With Micron killing […]

The post Samsung refutes consumer SSD phase-out rumours appeared first on OC3D.

Intel Installs ASML TWINSCAN EXE:5200B High-NA EUV Machine for 14A Node

Intel Foundry announced that it has managed to install the world's most advanced EUV machine—ASML's TWINSCAN EXE:5200B High-NA EUV scanner—in its facilities. The company is producing its 14A node using High-NA EUV lithography, marking the first industry transition from Low-NA. In collaboration with ASML, Intel has completed acceptance testing at Intel Foundry for its 14A node to enhance wafer output. The TWINSCAN EXE:5200B is ASML's second version of High-NA EUV scanners, following the TWINSCAN EXE:5000, which Intel initially used for its 14A trial runs. Intel previously reported processing over 30,000 wafers in a single quarter, achieving simplified manufacturing by reducing the steps needed for a specific layer from 40 to fewer than 10, resulting in significantly faster cycle times.

The new TWINSCAN EXE:5200B achieves an output of 175 wafers per hour in standard conditions, and Intel plans to tune it to over 200 wafers per hour. The machine also advances overlay precision, enabling accurate alignment of distinct lithography layers down to 0.7 nanometers. This achievement builds on Intel's High-NA EUV experience, which began in 2023 with the installation of the industry's first commercial High-NA tool at its Oregon research and development facility. Intel is currently shipping 14A PDK 0.5 to customers, who are reportedly very satisfied with the node's development. The company has praised 14A's progress, noting far better yield and performance parameters at this stage of development than the 18A node achieved.

(PR) AAEON's BOXER-6648-ARS Delivers Intel Core Ultra Series 2 Power in Rugged Box PC Form

Leading provider of industrial PC solutions AAEON (Stock Code: 6579), today introduced the BOXER-6648-ARS, its first fanless embedded Box PC featuring the new Intel Core Ultra Processors (Series 2) range (formerly codenamed Arrow Lake). Available in two SKUs offering either the Intel H810 (A1) or Intel Q870 Chipset (A2), the system offers a choice of Intel Core Ultra 9 Processor 285/285T, Intel Core Ultra 7 Processor 265/265T, or Intel Core Ultra 5 Processor 225/225T CPUs. As a result, the system can provide up to 24 cores of processing power alongside up to 36 TOPS of AI inferencing performance via the new Intel platform's integrated CPU, GPU, and NPU die architecture.

Primarily designed for more complex or AI-driven industrial automation applications, the BOXER-6648-ARS is equipped with six DB-9 ports for RS-232/422/485, an 8-bit DIO terminal block, and three LAN ports (two 2.5GbE, one 1GbE). The model based on the Intel H810 Chipset adds four USB 3.2 (5 Gbps) and four USB 2.0 ports to this selection, while the Intel Q870 Chipset model is more expansive, with six USB 3.2 (10 Gbps) ports and two USB 2.0. The other main differentiator between the two SKUs is Intel Active Management Technology support on the two 2.5GbE LAN ports, which is reserved for the Intel Q870 Chipset (A2) model only.

Intel Selects Pushkar Ranade as Interim Chief Technology Officer

Intel announced significant changes in its senior leadership today, particularly in a key visionary role second only to the CEO. Since Sachin Katti's departure a few weeks ago, the chief technology officer position has been vacant. Pushkar Ranade has now been appointed as the interim CTO. He will reportedly "help formulate the company's advanced technology strategy and to consolidate and develop critical emerging technologies, such as quantum computing, advanced interconnects, and novel materials within the new CTO Office." Pushkar Ranade has contributed to various Intel node developments and has been with the company for more than a decade, working on projects from the 65 nm node development to 7 nm SoCs. As interim CTO, he will assist CEO Lip-Bu Tan in executing his vision for a revitalized Intel.

Intel also made a few other leadership changes. Robin Colwell assumes leadership of Intel's senior government affairs, engaging worldwide with policymakers, regulators, and industry stakeholders. Additionally, Annie Shea Weckesser joins as senior vice president and chief marketing and communications officer, leading the company's newly integrated global marketing and communications organization. She will unify corporate reputation, brand strategy, and market engagement. Intel's CEO Lip-Bu Tan pointed out that all new executives offer the specialized knowledge and strategic vision essential for Intel's sustained success.

VT Chat – Minimal AI Chat with Deep Research Features


Introducing VT Chat, a privacy-first AI chat application that keeps all your conversations local while providing advanced research capabilities and access to 15+ AI models including Claude 4 Sonnet and Claude 4 Opus, O3, Gemini 2.5 Pro and DeepSeek R1.

Research features: Deep Research performs multi-step research with source verification, while Pro Search integrates real-time web search with grounding powered by Google Gemini.

There's also document processing for PDFs, a "thinking mode" to see complete AI reasoning, and structured extraction to turn documents into JSON. AI-powered semantic routing automatically activates tools based on your queries.

React2Shell Vulnerability Actively Exploited to Deploy Linux Backdoors

The security vulnerability known as React2Shell is being exploited by threat actors to deliver malware families like KSwapDoor and ZnDoor, according to findings from Palo Alto Networks Unit 42 and NTT Security. "KSwapDoor is a professionally engineered remote access tool designed with stealth in mind," Justin Moore, senior manager of threat intel research at Palo Alto Networks Unit 42, said in a

TechPowerUp x Chieftec Mega Giveaway: Entries Close Soon, Hurry!

TechPowerUp partners with Chieftec to bring our readers from the European Union a chance to grab as many as six pieces of Chieftec gaming PC hardware, but you'd better hurry: entries close soon! Up for grabs are a Chieftec The Cube cube-shaped microATX case; a spacious Chieftec Hunter 3 EATX tower case; a Chieftec Apex Lumo ATX mid-tower case; a Chieftec Stealth 1000 W ATX 3.1 modular power supply; a Chieftec Iceberg 360 RGB AIO CLC; and a Chieftec Iceberg White 360 AIO CLC. Entries have been open over the past week, but close on December 19. It's really easy to join in: just fill out a short form so we can get back to you if you've won!

For more details, and to participate, visit this page.

(PR) GIGABYTE Announces Availability of AORUS Prime 5 Gaming Desktop

GIGABYTE, the world's leading computer brand, announces that the AORUS PRIME 5 is now officially available. This high-performance desktop system is built on a new architecture with flagship-grade hardware, paired with GIGABYTE's signature cooling innovations and advanced fan technology. Designed to deliver exceptional speed and long-term reliability for gaming and multitasking, the AORUS PRIME 5 combines power, precision, and design with true plug-and-play simplicity.

The AORUS PRIME 5 not only features up to an AMD Ryzen 7 9800X3D processor and NVIDIA GeForce RTX 5080 graphics cards for multicore and next-generation AI performance, but is also built entirely with GIGABYTE products, including a 2 TB SSD and 32 GB of high-speed RGB memory for lightning-fast responsiveness and seamless multitasking. This configuration embodies GIGABYTE's DNA of proven stability, even extending that excellence into a fully integrated cooling solution.

NVIDIA Acquires SchedMD, Bolstering AI Infrastructure

NVIDIA's acquisition of SchedMD, the creator of Slurm, strategically enhances its control over critical open-source workload management for HPC and AI.

Google to Shut Down Dark Web Monitoring Tool in February 2026

Google has announced that it's discontinuing its dark web report tool in February 2026, less than two years after it was launched as a way for users to monitor if their personal information is found on the dark web. To that end, scans for new dark web breaches will be stopped on January 15, 2026, and the feature will cease to exist effective February 16, 2026. "While the report offered general

Samsung Denies NAND Flash Exit as Sapphire PR Manager Calls for Calm

The gaming hardware industry has been in a bit of a state of late as a result of increased demand for DRAM causing a supply shortage and massive price hikes. Owing to the aforementioned issues, Samsung has been rumored to be converting some of its HBM3E and NAND production capacity to DRAM in order to meet demand. The ensuing rumors claimed that Samsung was planning an exit from the NAND (and thus SATA SSD) market as a result of the shift in focus. Micron's recent exit from the consumer space, and the closure of its Crucial memory and SSD brand, lent credence to these rumors, but they have recently been addressed directly by a Samsung spokesperson in a response to Wccftech. The spokesperson simply said "The rumor regarding the phasing out of Samsung SATA or other SSDs is false," apparently declining to expand any further. However, this is just one indication that the DRAM crisis may not be as long-lived as some have claimed.

Around the same time, in an interview with Hardware Unboxed, Edward Crisler, the PR manager for Sapphire, cautioned PC gamers and potential buyers against panic buying, saying, "the good news is, I don't think the real pain, that we're suffering now and for the next six months or so, is going to last much longer than that," although he goes on to say that the actual issue at hand is the uncertainty of the situation. The implication is that the market will eventually begin to stabilize within the next six to eight months. That isn't necessarily to say that DRAM prices will return to normal, but rather that DRAM supply will eventually catch up to demand. It could also be the case that the massive AI and datacenter boom currently causing the shortages will slow in the coming months, which would also help to stabilize things somewhat. Crisler is careful to note that DRAM prices may still remain somewhat elevated after the market stabilizes, but he seems to be convinced that the sky-high prices we're seeing for consumer memory will fall to some degree and that the gaming industry will adapt to whatever the end result of the market shake-up turns out to be.

A Japanese startup built a speaker that's basically a sheet of fabric


The technology originated at Japan's National Institute of Advanced Industrial Science and Technology (AIST) in 2018, where researchers demonstrated a thin, lightweight, bendable electronic textile. Sensia's new product represents the first commercial application of that research, adapting the concept into a consumer-ready format that requires no traditional speaker cones or enclosures.

Outage Owl – Stop finding out about vendor outages from angry customer tickets.


Outage Owl monitors 20+ vendor status pages in real time and alerts your team and customers when issues arise. Add a single script to show a website banner during outages and connect Slack to notify your team before tickets pile up. Create custom incidents and messages, tune alert rules, delays, and quiet hours, and keep everyone informed within seconds. Set up in under five minutes, and start free with one alert rule.

Zinggit – Voice note to text with AI


Zinggit is an AI-powered voice note to text app designed to take your idea to content quicker. No more typing out ideas, or trying to create an outline. Just speak your thoughts and 'vibe type' your first draft. Perfect for busy business owners with tons of ideas, agency owners trying to sound out their next article, or social media managers who want to summarise an idea into a post.

View startup

White House AI Czar David Sacks on Navigating the AI Frontier: Regulation, Race, and Jobs

The rapid acceleration of artificial intelligence has ignited a multifaceted debate spanning innovation, national security, and economic impact, a tension vividly explored in a recent CNBC “Closing Bell Overtime” interview. David Sacks, the White House AI and Crypto Czar, spoke with Morgan Brennan about President Trump’s executive order aiming to streamline AI regulation, the intensifying […]

The post White House AI Czar David Sacks on Navigating the AI Frontier: Regulation, Race, and Jobs appeared first on StartupHub.ai.

Star Wars: Fate of the Old Republic's Director Believes New Studio Will Deliver Game Before 2030

Star Wars: Fate of the Old Republic (FotoR) was unveiled at The Game Awards last week. Ahead of the event in Los Angeles, the games industry rumor mill had generated speculation about various "Knights of the Old Republic" (KotOR) remake or reboot projects. Days ago, Lucasfilm and Arcanaut Studios unveiled a spiritual successor—many attendees and stream watchers were surprised by the collaborators' teaser trailer. Long-time franchise fans were happy to see Casey Hudson—a BioWare veteran—introduce his latest project. SW: Fate of the Old Republic's game director has worked on a number of popular titles, including Knights of the Old Republic (2003), Jade Empire (2005), the original Mass Effect trilogy (2007 to 2012), and Anthem (2019).

Predictably, well-known figures weighed in with skeptical opinions and predictions. Jason Schreier—of Bloomberg fame—pointed to a very recently booted-up development cycle: "Lucasfilm says the studio (Arcanaut) was founded this year, which means that 2030 is an 'optimistic' guess. Maybe it'll be a PlayStation 7 game." Schreier's expertise—as a journalist and author—mostly covers the making of modern AAA titles, usually mega-expensive and time-consuming affairs. In theory, FotoR's development team "has it easy" due to their game being a "narrative-driven single-player action RPG," rather than a huge open world experience. In a response seemingly directed at outsider estimates, Hudson commented: "don't worry about the 'not till 2030' rumors. Game will be out before then. I'm not getting any younger!"

Phantom – AI website builder that brings your ideas to life in minutes.


Phantom is an AI website builder designed to help anyone go from an idea to a fully functional website in just a few minutes. It handles all the heavy lifting automatically — setting up authentication, database, payments, analytics, and even AI integrations for you. Instead of juggling multiple tools or writing code from scratch, you just describe what you want, and Phantom builds it out instantly.

It runs on a network of specialized AI agents, each focused on a different area like frontend, backend, bug fixing, and review. This makes the process faster, more accurate, and more creative. Phantom lets you skip the setup and get straight to building — without needing technical expertise.

View startup

Tesla’s Trillion-Dollar AI Future: Dan Ives on Autonomy and Robotics

Wedbush Securities’ Dan Ives recently offered a compelling vision of Tesla’s future, asserting that the company, alongside Nvidia, stands at the forefront of the “physical AI revolution.” This isn’t merely about electric vehicles; it’s about the profound convergence of hardware and artificial intelligence to create tangible, real-world autonomous capabilities. Ives’s commentary underscores a pivotal shift […]

The post Tesla’s Trillion-Dollar AI Future: Dan Ives on Autonomy and Robotics appeared first on StartupHub.ai.

Bolmo Advances Byte-Level Language Models with Practicality

AI2's Bolmo makes byte-level language models practical by "byteifying" existing subword models, offering superior character understanding and flexible inference.

The post Bolmo Advances Byte-Level Language Models with Practicality appeared first on StartupHub.ai.

(PR) NVIDIA Acquires Open-Source Workload Management Provider SchedMD

NVIDIA today announced it has acquired SchedMD - the leading developer of Slurm, an open-source workload management system for high-performance computing (HPC) and AI - to help strengthen the open-source software ecosystem and drive AI innovation for researchers, developers and enterprises. NVIDIA will continue to develop and distribute Slurm as open-source, vendor-neutral software, making it widely available to and supported by the broader HPC and AI community across diverse hardware and software environments.

HPC and AI workloads involve complex computations running parallel tasks on clusters that require queuing, scheduling and allocating computational resources. As HPC and AI clusters get larger and more powerful, efficient resource utilization is critical. As the leading workload manager and job scheduler in scalability, throughput and complex policy management, Slurm is used in more than half of the top 10 and top 100 systems in the TOP500 list of supercomputers. Slurm, which is supported on the latest NVIDIA hardware, is also part of the critical infrastructure needed for generative AI, used by foundation model developers and AI builders to manage model training and inference needs.

(PR) Bungie Goes In-depth with 23-minute-long "Vision of Marathon" Featurette

Behind the scenes the team here at Bungie has been incredibly hard at work on Marathon, our brand-new PvPvE extraction shooter—where the dark sci-fi world of Tau Ceti collides with tense survival FPS gameplay. In Marathon, players scavenge the lost colony of Tau Ceti IV as a bio-cybernetic Runner while surviving against hostile UESC security forces, rival Runners, and unpredictable environments to seek their fortune. Today's ViDoc shares a new look at Marathon's gameplay and immersive sci-fi setting. The team also explores updates since Alpha, like improved visual fidelity, proximity chat, solo play, and a new Runner shell: Rook.

On Tau Ceti, death is the first step
Your journey begins in the proving grounds of Perimeter, where you'll get your cybernetic legs under you and learn the basics of how to extract alive. Then, the anomaly-scarred Dire Marsh ups the ante and takes you to the remains of the human colony that's filled with more danger and bigger rewards. As you grow your vault and survival skills, you'll make your way to Outpost, the UESC's forward base of operations with patrols, locked rooms, and loot that will have you tempting fate at each turn.

ASUS DUAL GeForce RTX 5060 Ti & 5060-series Slimmed Down with "2.1-slot" EVO Models

ASUS has expanded its DUAL graphics card family with a smattering of lower-end models—based on NVIDIA's GB206 "Blackwell" GPU—that offer a slimmer profile. Brand-new listings, added in a low-key manner, are tagged with "EVO" designations. As highlighted by Vortez, the DUAL EVO design comes in at "2.1-slot" thickness (42 mm). This slight shrinkage—mostly in one dimension: width—distinguishes the new entries from standard DUAL options, which exist as 2.5-slot thick (50 mm) offerings. There are negligible differences—between twin Axial-tech fan-cooled EVO and non-EVO SKUs—in terms of card length and height.

At a glance, ASUS has only introduced this slimmer shroud design across overclocked (OC) edition and non-overclocked GeForce RTX 5060 Ti 16 GB and RTX 5060 8 GB models. Confusingly, this generation's EVO aesthetic actually debuted mid-way through 2025, under the guise of vanilla DUAL GeForce RTX 5050 graphics card options—all truly 2.0-slot (40 mm) thick. This brand-new wave of cards has led to an adjusted placement of single 8-pin power connectors—DUAL GeForce RTX 5060 Ti and RTX 5060 EVO designs have this aspect positioned closer to their I/O brackets. The TechPowerUp readership can familiarize itself with a myriad of DUAL EVO and DUAL options present within the site's well-maintained GPU database. Fresh entries popped up before the publication of this news piece.

(PR) "Highguard" Launches Next Month - a PvP Raid Shooter Made by Apex Legend/Titanfall Creators

From the creators of Apex Legends and Titanfall comes Highguard: a PvP raid shooter where players will ride, fight, and raid as Wardens, arcane gunslingers sent to fight for control of a mythical continent. There they'll clash against rival Warden crews for possession of the Shieldbreaker, then break into and destroy the enemy base to secure territory in this all-new breed of shooter. Play FREE when Highguard launches on Steam, Xbox Series X|S, and PlayStation 5 on January 26, 2026.

An all-new PvP raid shooter from Wildlight Entertainment. Ride, fight, and raid in the battle for control of a mythical continent. Launching for free early next year. Wishlist now on Steam, Xbox Series X|S, and PlayStation 5. Learn more about Highguard at www.playhighguard.com, and stay up to date on the latest game news.

Rebuilding American Industry: The AI-Powered Factory Renaissance

Erin Price-Wright, a General Partner at Andreessen Horowitz, unveiled a compelling vision for “The Renaissance of the American Factory” as part of the firm’s 2026 Big Ideas series. Her presentation posits that America’s industrial muscle, which has atrophied over decades due to offshoring, financialization, and regulatory burdens, is poised for a significant resurgence. This revitalization […]

The post Rebuilding American Industry: The AI-Powered Factory Renaissance appeared first on StartupHub.ai.

Beyond Snippets: The Evolving Landscape of AI Code Evaluation

The rapid ascent of AI in code generation, from single-line suggestions to architecting entire codebases, demands an equally sophisticated evolution in how these models are evaluated. This critical shift was at the heart of Naman Jain’s compelling presentation at the AI Engineer Code Summit, where the Engineering lead at Cursor unpacked the journey of AI […]

The post Beyond Snippets: The Evolving Landscape of AI Code Evaluation appeared first on StartupHub.ai.

Google DeepMind Unveils Gemini 3 and Nano Banana Pro, Redefining AI Development

Google DeepMind recently showcased its latest advancements in artificial intelligence at the AI Engineer Code Summit, where Product Manager Kat Kampf and Product & Design Lead Ammaar Reshi introduced Gemini 3 Pro and Nano Banana Pro. Their presentation, “Building in the Gemini Era,” highlighted how these new models, combined with the Google AI Studio, are […]

The post Google DeepMind Unveils Gemini 3 and Nano Banana Pro, Redefining AI Development appeared first on StartupHub.ai.

Vertex AI Unlocks Flexible Open Model Deployment

The accelerating pace of AI development has made the deployment of open models a critical challenge, often mired in infrastructure complexities. Google Cloud’s Vertex AI platform, as detailed by Developer Advocate Ivan Nardini in his recent video, “Serving open models on Vertex AI: The comprehensive developer’s guide,” directly addresses this by offering a strategic roadmap […]

The post Vertex AI Unlocks Flexible Open Model Deployment appeared first on StartupHub.ai.

Featured Chrome Browser Extension Caught Intercepting Millions of Users' AI Chats

A Google Chrome extension with a "Featured" badge and six million users has been observed silently gathering every prompt entered by users into artificial intelligence (AI)-powered chatbots like OpenAI ChatGPT, Anthropic Claude, Microsoft Copilot, DeepSeek, Google Gemini, xAI Grok, Meta AI, and Perplexity. The extension in question is Urban VPN Proxy, which has a 4.7 rating on the Google Chrome

Google Ads adds VTC bidding for App campaigns

Google Ads launched VTC (view-through conversion)-optimized bidding for Android app campaigns, letting advertisers toggle bidding toward conversions that happen after an ad is viewed rather than clicked.

Previously, VTC worked as a hidden signal inside Google’s systems. Now, it’s a clear, explicit optimization option.

The shift. Google is shifting app advertising away from click-centric logic and toward incrementality and influence, especially for formats like YouTube and in-feed video. This update aligns bidding more closely with how users actually discover and install apps.

Why we care. You can now bid beyond clicks, improving measurement for video-led app campaigns and strengthening the case for upper-funnel activity.

Who benefits most. Video-first app advertisers and teams focused on awareness, engagement, and long-term growth – not just last-click installs.

What to watch

  • Increased reliance on Google’s attribution model.
  • Potential changes in CPA expectations.
  • Greater emphasis on creative quality over click-driving tactics.

First seen. This update was first spotted by Senior Performance Marketing Executive Rakshit Shetty, who posted about it on LinkedIn.

Sergey Brin: Google ‘messed up’ by underinvesting in AI

Sergey Brin at Stanford Dec. 2025

Sergey Brin, Google’s co-founder, admitted that Google “for sure messed up” by underinvesting in AI and failing to seriously pursue the opportunity after releasing the research that led to today’s generative AI era.

Google was scared. Google didn’t take it seriously enough and failed to scale fast enough after the Transformer paper, Brin said. Also:

  • Google was “too scared to bring it to people” because chatbots can “say dumb things.”
  • “OpenAI ran with it,” which was “a super smart insight.”

The full quote. Brin said:

  • “I guess I would say in some ways we for sure messed up in that we underinvested and sort of didn’t take it as seriously as we should have, say eight years ago when we published the transformer paper. We actually didn’t take it all that seriously and didn’t necessarily invest in scaling the compute. And also we were too scared to bring it to people because chatbots say dumb things. And you know, OpenAI ran with it, which good for them. It was a super smart insight and it was also our people like Ilya [Sutskever] who went there to do that. But I do think we still have benefited from that long history.”

Yes, but. Google still benefits from years of AI research and control over much of the technology that powers it, Brin said. That includes deep learning algorithms, years of neural network research and development, data-center capacity, and semiconductors.

Why we care. Brin’s comments help explain why Google’s AI-driven search changes have felt abrupt and inconsistent. After years of hesitation about shipping imperfect AI, Google is now moving fast (perhaps too fast?). The volatility we see in Google Search is collateral damage from that catch-up mode.

Where is AI going? Brin framed today’s AI race as hyper-competitive and fast-moving: “If you skip AI news for a month, you’re way behind.” When asked where AI is going, he said:

  • “I think we just don’t know. Is there a ceiling to intelligence? I guess in addition to the question that you raised, can it do anything a person can do? There’s the question, what things can it do that a person cannot do? That’s sort of a super intelligence question. And I think that’s just not known, how smart can a thing be?”

One more thing. Brin said he often uses Gemini Live in the car for back-and-forth conversations. The public version runs on an “ancient model,” Brin said, adding that a “way better version” is coming in a few weeks.

The video. Brin’s remarks came at a Stanford event marking the School of Engineering’s 100th anniversary. He discussed Google’s origins, its innovation culture, and the current AI landscape. Here’s the full video.

Sergey Brin, Google co-founder, says Google was slow to scale AI and cautious about chatbots because they say 'dumb things.'

Expect to see HDMI 2.2 in action at CES 2026

Prototype HDMI 2.2 hardware will be showcased at CES

The HDMI Licensing Administrator has confirmed that early HDMI 2.2 prototype hardware will be showcased at CES 2026. This will give the world its first look at the next-generation display technology. With HDMI 2.2, the HDMI standard’s maximum bandwidth will increase from 48 Gbps (HDMI 2.1) […]

The post Expect to see HDMI 2.2 in action at CES 2026 appeared first on OC3D.

Thunderobot Teases Upcoming Intel "Panther Lake" APU-based ZERO Air Gaming Laptop

Thunderobot (partnered with Machenike) is set to introduce the ZERO Air gaming laptop at next month's CES trade show. A teaser video was made available last Friday (December 12), but an earlier press release was largely ignored by PC hardware press outlets. The Chinese manufacturer's 2026 "AI gaming" product lineup will make use of "Intel Core Ultra Series 3 processors," including the brand-new 16-inch ZERO Air design. Thunderobot's latest trailer hints at the ZERO Air being one of the first portable devices to feature Intel Core Ultra "Panther Lake" processors combined with NVIDIA GeForce RTX 5000-series Laptop graphics cards. A multitude of Intel partners are expected to showcase Core Ultra 300 series-based laptops and notebooks at CES 2026.

For example, MSI has already revealed its next-gen Prestige lineup, with a view to fully introducing the professional 2026 series in the new year. Members of the press and influencers have already handled early samples as of late November. Thunderobot's forthcoming lightweight (1.6 kg) gaming laptop is said to offer "dual full-power performance," though pre-launch press material seems to be vague about the meaning of this "dual" system. Last week we heard about ASUS readying an extremely lightweight Zenbook DUO design; this next-gen ultra-slimline notebook was marketed as featuring a dual-battery configuration.

(PR) India Launches DHRUV64, Its First 1 GHz, 64-bit Dual-Core RISC-V CPU

India has achieved a significant milestone in its semiconductor journey with the launch of DHRUV64, a fully indigenous microprocessor developed by the Centre for Development of Advanced Computing (C-DAC) under the Microprocessor Development Program (MDP). DHRUV64 provides the nation with reliable, homegrown processor technology capable of supporting strategic and commercial applications, and it marks a major advancement in India's pursuit of self-reliance in advanced chip design.

DHRUV64 is built with modern architectural features. It delivers higher efficiency, enhanced multitasking capability and improved reliability. Its advanced design enables seamless integration with a wide range of external hardware systems. The processor's modern fabrication leverages technologies used for high-performance chips. This makes DHRUV64 suitable for sectors such as 5G infrastructure, automotive systems, consumer electronics, industrial automation and the Internet of Things (IoT).

(PR) Alchemy Factory Available Now via Early Access, Devs Anticipate Full Release Within 1 Year

Alchemy Factory is now available in Early Access on Steam. Developed by D5 Copperhead and published by Gamirror Games, this medieval-style automation production game combines simulation management and sandbox building. Early Access is priced at $17.99, with a 10% discount available for the first two weeks.

Alchemy Factory is an automation production game set in a medieval world, blending simulation management and sandbox building. Players take on the role of a magic apprentice who unlocks magical technologies, designs automated production lines, and sells various alchemical goods. The goal is to eventually become a master alchemist whose wealth rivals nations, spanning fields like potion making, metallurgy, and jewelry, and to gradually build your own alchemical kingdom.

MSI Claw A8 Nearing Retail Release in USA, Approx. 4 - 5 Months After Launch in Asia

MSI's AMD APU-powered Claw A8 handheld gaming PC seems to be heading to North American shores. Over the past weekend, web sleuths noticed Newegg and B&H Photo Video listings. At the time of writing, a pre-release price tag—$1149—is attached to the territory specific "BZ2EM-070US" SKU. Reflecting similar past practices, MSI has staggered the rollout of its latest Windows 11-based portable gaming system. The Taiwanese manufacturer has tended to favor Asian territories when distributing the first waves of Claw hardware—notably, the current-gen A8 debuted in China (and nearby) around "late July/early August."

Unlike most of the competition, MSI opted to use Intel APUs across its first and second-generation Claw platforms. Even in Core Ultra 7 form, Team Blue's "Meteor Lake" chipsets trailed behind competing AMD-centric hardware. MSI's second wave of handheld PCs—Claw 8 AI+ and Claw 7 AI+—were driven by Intel's Core Ultra 7 258V APU. This "Lunar Lake" mobile processor currently competes with Team Red's AMD Ryzen Z2 Extreme SoC, as tracked/observed by Golden Pig Upgrade. Around mid-May, we picked up on whispers of the MSI Claw family diversifying into the AMD APU realm. A couple of months later, the Ryzen Z2 Extreme-powered Claw A8 started to trickle out at retail, eventually reaching UK stores by September, priced at £849 (~$1135 USD) and available in Polar Tempest (white) or Neon Green finishes. Presently, only the white 1 TB + 24 GB LPDDR5X model is listed by Newegg and B&H.

KIOXIA Prepares Affordable G3 M.2 SSDs with QLC NAND and PCIe 5.0 Connection

KIOXIA is introducing a new storage solution with its Exceria G3 SSD series, an M.2 PCIe NVMe SSD lineup that combines PCIe 5.0 speeds with QLC NAND memory to offer more affordable options for fast PCIe 5.0 storage. Scheduled for release in the fourth quarter of 2025, these M.2 2280 drives will be available in 1 TB and 2 TB capacities. They promise sequential read speeds of up to 10,000 MB/s and write speeds reaching 9,600 MB/s for the larger model. By integrating the latest interface standard with its eighth-generation BiCS FLASH QLC memory, KIOXIA aims to attract users looking to upgrade from older SATA and PCIe 3.0 drives, without pushing them into the higher-priced tier dominated by TLC NAND.

While QLC memory has historically lagged behind TLC in endurance and performance, the 2 TB G3 surpasses KIOXIA's Exceria Plus G4 in sequential writes and random IOPS, achieving 1.6 million read operations and 1.45 million write operations per second. This improvement is due to the BiCS8 architectural enhancements, which enable KIOXIA to rate the drives at 600 TBW for the 1 TB model and 1,200 TBW for the 2 TB version. These durability figures are comparable to most modern TLC offerings. The 1 TB variant is also impressive, with 10,000 MB/s read speeds, 8,900 MB/s write speeds, and 1.3 million IOPS for random reads. Naturally, performance decreases with smaller capacities. Pricing is still unknown, but the point here is affordability, so that might be something to look forward to.

(PR) S.T.A.L.K.E.R. 2 Free Story & Content Update Arrives on December 16

On December 16 we will release a content update for S.T.A.L.K.E.R. 2: Heart of Chornobyl with a brand-new storyline. All of this—completely free for all owners of the game. Christmas arrived early in the Zone. Let's get into it.

So what is this new quest?
Strange things start happening near the Malachite in the western part of the map. The research groups sent by scientists reported a signal interfering with their usual PDA communications. After listening to it, stalkers were suffering from headaches, nosebleeds, and hallucinations. Professor Medulin witnessed everything with his own eyes. While good at scientific research, he didn't know enough about the radio signal to figure out what was happening. That's where Banzai, a radio enthusiast, steps in; he joined the investigation, but something went wrong. You may hear their emergency transmission almost anywhere in the Zone. Pay close attention to the Red Forest Region on your map, and from here on, your intuition, rationality and choices will lead you down the rabbit hole to see how deep it is.

HDD Prices Soar, Sparking Fears of Incoming Shortage

The good-old "spinning rust"—commonly referred to as Hard Disk Drives (HDDs)—may be the next segment of computing infrastructure depleted by the AI boom. According to DigiTimes, contract negotiations for the fourth quarter of 2025 concluded with traditional HDD prices settling about 4% higher quarter-over-quarter, marking the largest rise in the past eight quarters. That makes it the largest increase in recent years and indicates that demand is again outpacing supply, even in slower storage segments like HDDs. The massive demand reportedly comes from particularly strong uptake of desktop 3.5-inch drives in China and continued heavy procurement of high-capacity units by major U.S. cloud service providers and hyperscalers.

In China, a preference for domestically produced CPUs and operating systems, combined with an increase in local PC assembly, has brought HDDs back into a first-class role in certain PC configurations after years of being displaced by SSDs. Additionally, concerns about SSD data retention have led some customers and policymakers to favor HDDs for specific workloads. Large cloud operators are also expanding their exabyte-class storage for AI, analytics, and archival needs. Manufacturers report that utilization rates are at or near full capacity as demand extends beyond traditional surveillance and backup applications. In AI infrastructure especially, the need to store massive datasets for model training has prompted AI labs to adopt HDD-based storage where speed isn't critical.

ResumaLive – 2-Minute Intro for Video Creators


ResumaLive is a platform where video creators build swipeable, shareable profiles that introduce them in under 2 minutes.

Clients don't have time to dig through scattered links; they leave before they understand you. ResumaLive guides them through your identity, your credibility, your showreel, your personality, and how to reach you, like a movie trailer for your career. It doesn't replace your portfolio or social media. It gets your foot in the door, then they explore the rest.

View startup

The 'ExtrudeX' machine wants to turn your 3D printing waste into reusable filament, all at home — this Kickstarter project is itself 3D-printable with minimal hardware costs

Your 3D printing costs are about to go way down with the ExtrudeX, an extrusion machine claiming to recycle 3D printing waste into new filament. The project is live on Kickstarter, and should you choose to pledge it, you'll get STL files to 3D print the machine at home yourself, along with a list of minimal hardware required to complete the build.

AI’s Real Boom: Data Centers, ROI, and a Maturing IPO Market

“Every single AI company on the planet is saying if you give me more compute, I can make more revenue.” This assertion by Matt Witheiler, Head of Late-Stage Growth at Wellington Management, cuts directly to the core of the current artificial intelligence boom, framing the debate around an “AI bubble” not as a question of […]

The post AI’s Real Boom: Data Centers, ROI, and a Maturing IPO Market appeared first on StartupHub.ai.

Rockefeller’s Ruchir Sharma Declares AI Market in “Advanced Stages of a Bubble”

The current euphoria surrounding artificial intelligence has propelled the tech sector to unprecedented valuations, prompting seasoned financial analysts to question the sustainability of this growth. Ruchir Sharma, Chairman of Rockefeller International and Founder & CIO of Breakout Capital, offers a sobering perspective, asserting that the market is already in the “advanced stages of a bubble.” […]

The post Rockefeller’s Ruchir Sharma Declares AI Market in “Advanced Stages of a Bubble” appeared first on StartupHub.ai.

Apple Engineers Squeeze Powerhouse Vision Models into a Single Layer for Hyper-Efficient Image Generation

Generative AI is getting a major speed and efficiency boost, thanks to a surprisingly simple new framework from Apple researchers. The paper, “One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation,” introduces the Feature Auto-Encoder (FAE), a novel approach that dramatically slashes the complexity required to integrate massive, pre-trained visual encoders (like DINOv2 […]

The post Apple Engineers Squeeze Powerhouse Vision Models into a Single Layer for Hyper-Efficient Image Generation appeared first on StartupHub.ai.

Google says don’t use JavaScript to generate a noindex tag in the original page code

Google has updated its JavaScript SEO basics documentation to clarify how Google’s crawler handles noindex tags in pages that use JavaScript. In short, if “you do want the page indexed, don’t use a noindex tag in the original page code,” Google wrote.

What is new. Google updated this section to read:

  • “When Google encounters the noindex tag, it may skip rendering and JavaScript execution, which means using JavaScript to change or remove the robots meta tag from noindex may not work as expected. If you do want the page indexed, don’t use a noindex tag in the original page code.”

In the past, it read:

  • “If Google encounters the noindex tag, it skips rendering and JavaScript execution. Because Google skips your JavaScript in this case, there is no chance to remove the tag from the page. Using JavaScript to change or remove the robots meta tag might not work as expected. Google skips rendering and JavaScript execution if the robots meta tag initially contains noindex. If there is a possibility that you do want the page indexed, don’t use a noindex tag in the original page code.”

Why the change. Google explained, “While Google may be able to render a page that uses JavaScript, the behavior of this is not well defined and might change. If there’s a possibility that you do want the page indexed, don’t use a noindex tag in the original page code.”

Why we care. It may be safer not to rely on JavaScript for important directives, such as blocking Googlebot or other crawlers. If you want to ensure a search engine does not index or rank a specific page, don't use JavaScript to apply that directive.
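
For illustration only, here is a minimal audit sketch (not Google tooling) that checks whether a noindex directive appears in the initial HTML a crawler receives, before any JavaScript runs. The URL is a placeholder, and the regex is a simple heuristic; the point is that this pre-JavaScript version of the page is what may decide whether rendering happens at all.

```python
import re
import urllib.request

def has_noindex_in_source(url: str) -> bool:
    """Check the raw server response (before any JavaScript runs) for a robots noindex meta tag."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    # Simple heuristic: look for <meta name="robots" ... content="...noindex...">
    # in the original markup. Attribute order can vary on real pages.
    pattern = re.compile(
        r'<meta[^>]*name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
        re.IGNORECASE,
    )
    return bool(pattern.search(html))

# Placeholder URL: if this returns True, Google may skip rendering entirely,
# so JavaScript that later removes the tag may never run.
print(has_noindex_in_source("https://example.com/some-page"))
```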

Why share of search matters more than traffic in the AI era

The SEO industry is entering its most turbulent period yet.

Traffic is declining. AI is absorbing informational queries. 

Social platforms now function as search engines. Google is shifting from a gateway to an answer engine.

The result is a sector running in circles – unsure what to measure, what to optimize, or even what SEO is meant to do.

Yet within this turbulence, something clear has emerged.

A single marketing metric that cuts through the noise and signals brand health and future demand. 

A metric that marketers and SEOs can align around with confidence.

That metric is share of search.

Discovery is changing, and measurement must change with it

The old model of being discovered by accident through classic search behavior is disappearing.

AI Overviews answer questions without sending traffic anywhere. 

Meta is already rolling out its own AI to answer user queries. 

TikTok and YouTube continue to grow as product discovery engines. 

It is only a matter of time before LinkedIn becomes a business search engine powered by conversational AI.

We are witnessing a seismic shift. In moments like this, measurement becomes even more important. 

Many SEO metrics are losing meaning, but one is rapidly gaining importance.

What share of search actually measures

Share of search is a metric developed by James Hankins and Les Binet. 

It is calculated by dividing a brand’s search volume by the total search volume for all brands in its category. 

The result shows the proportion of category interest the brand commands.
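
For readers who want to see the arithmetic, here is a minimal sketch of the calculation; the brand names and volumes are hypothetical, and estimates from any keyword tool can stand in for them.

```python
def share_of_search(brand_volumes: dict[str, float]) -> dict[str, float]:
    """Divide each brand's search volume by the total volume for the category."""
    total = sum(brand_volumes.values())
    return {brand: volume / total for brand, volume in brand_volumes.items()}

# Hypothetical monthly search volumes for one category
category = {"Brand A": 40_000, "Brand B": 35_000, "Brand C": 25_000}
print(share_of_search(category))
# {'Brand A': 0.4, 'Brand B': 0.35, 'Brand C': 0.25}
```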

The value is not in the calculation itself, but in what the metric correlates with.

Studies published by the Institute of Practitioners in Advertising (IPA) show that share of search correlates strongly with market share and future buying behavior. 

As the IPA notes:

  • “Share of search is a leading indicator or predictor of share of market. When share of search goes up, share of market tends to rise. When share of search goes down, share of market falls.”

In simple terms, consumers search for brands they are considering, buying, or using. 

That makes search behavior one of the clearest available signals of real demand.

Share of search was never designed to be perfect. It does not capture every nuance of how people find information across platforms. 

It was built as a practical proxy for brand demand – and right now, practical measurement is exactly what the industry needs.

Dig deeper: Measuring what matters in a post-SEO world

From traffic to demand: Why marketers need a new signal

Traffic as a measurement has become almost meaningless. 

It has been easy to inflate, manipulate, and misunderstand.

Goodhart’s Law explains why. When a measure becomes a target, it stops being a good measure. 

Traffic was treated as a target for years, and as a result, it stopped being a reliable indicator of anything meaningful.

Now traffic is falling – not because brands are doing anything wrong, but because AI is answering questions before users ever reach a website.

Ironically, this makes traffic more meaningful again, as much of the noise that once inflated it is disappearing.

The bigger advantage, however, belongs to share of search. 

It cannot be inflated through content tactics or gamed by chasing trends. It reflects underlying consumer interest.

That is why share of search has become so significant. 

It shows whether a brand is being searched for more or less than its competitors. 

When share of search rises, brand demand is growing. When it falls, demand is weakening.

If an entire category collapses – as it did with air fryers once most consumers had already bought one – the metric also provides a clear signal that demand for the overall market is shrinking.

There is another advantage. Share of search is a multi-platform metric.

A metric that crosses platforms

People no longer search in one place. 

Product searches may begin on Amazon, TikTok, or Facebook. 

Credibility checks often happen on YouTube. Long-form research may still take place on Google.

Discovery is fragmented, and behavior is fluid.

Share of search adapts to this reality. It is platform agnostic. 

You can measure it using Google Trends, Ahrefs, Semrush, My Telescope, or any platform that provides reliable volume estimates. 

You can track demand across Amazon, TikTok, YouTube, and emerging AI search interfaces.

Where the behavior happens matters less than the signal itself. 

If people are looking for your brand, they are demonstrating intent.

This cross-platform visibility is critical because AI search sends little traffic to websites. 

ChatGPT, Claude, and other LLMs present answers, snippets, and summaries, but rarely generate click-through. 

Links are often buried, inaccessible, or accompanied by friction.

Instead, these systems trigger brand search. 

Users encounter a brand in an AI response, then search for it when they want more information.

As a result, share of search becomes the tail-end signal of everything marketing does, including AI exposure. 

When share of search rises, marketing is working. When it falls, it is not.

However, the metric needs a champion.


A metric SEOs should champion

The SEO industry has spent years focused on two types of keywords: 

  • Non-brand buyer intent.
  • Non-brand informational. 

That approach made sense when classic search was the dominant discovery channel. That world is disappearing.

Yet many SEOs continue to cling to outdated deliverables, such as structured data micro-optimization or churning out endless blog posts to influence hypothetical AI citations.

Citations are a distraction. 

At best, they are a minor signal in LLM outputs. 

At worst, they are a misleading metric that will not stand up to financial scrutiny. 

When CFOs start questioning the value of SEO budgets, citations will not hold up as evidence of ROI.

Share of search will.

SEOs who embrace share of search position themselves not as keyword tacticians, but as strategic insights partners. 

They become interpreters of demand who help:

  • CMOs understand whether brand marketing is breaking through.
  • Leadership teams see where consumer interest is rising or falling.

This shift changes the role of SEO entirely. 

Instead of being judged by how much content they produce, SEOs begin to be valued for how well they understand search behavior and the commercial impact of that behavior.

A well-structured share of search report tells a coherent story:

  • Is the brand being searched for more this quarter?
  • Are competitors gaining ground?
  • Is the category contracting?
  • Did a recent PR campaign increase branded search?
  • Did a product launch move the needle?

In the AI era, this narrative becomes essential. 

Someone inside the organization must understand how people search, where they search, and what the numbers mean.

SEOs are naturally positioned to fill that role. You have the background and the expertise. 

And as AI automates more mechanical SEO tasks, this progression becomes increasingly natural.

Because share of search requires interpretation.

Dig deeper: Why LLM perception drift will be 2026’s key SEO metric

The depth and complexity available

Share of search does not have to be a single top-level number. It can be:

  • Broken down by product line, model, or competitive set. 
  • Segmented into branded and semi-branded queries.
  • Tracked across every channel where search behavior exists.
  • Compared against AI model outputs to understand where visibility aligns or diverges.

Consider the air fryer category. 

Demand collapsed across the market once most consumers had already purchased one. 

Within that collapse, however, individual models rose and fell based on their appeal. 

Ninja’s latest model, for example, showed spikes and dips that revealed shifts in consumer interest long before sales data arrived.

Share of search acts as early detection for market movement.

SEOs who understand this level of nuance become indispensable. They can:

  • Advise whether a category is shrinking or whether a competitor is accelerating. 
  • Identify gaps in PR coverage.
  • Highlight where LLMs reference competitor brands more frequently.
  • Signal when product positioning needs reinforcement.

This is the future skill set – not chasing rankings, but interpreting behavior.

A human role that AI can’t replace

As AI becomes more integrated into search and site optimization, many mechanical SEO tasks will be increasingly automated. 

The interpretation of marketing performance, however, cannot be fully automated.

Share of search requires human judgment. 

It requires an understanding of context, seasonality, category dynamics, and brand strategy. 

That role can and should belong to the SEO professional.

Some agencies may label this function an insights specialist or a data analyst. 

Some organizations may house it within marketing. 

But the people who understand search behavior most deeply are SEOs. 

They are best positioned to interpret what the numbers mean and communicate those insights to leadership teams.

Leadership teams need to understand what is happening with their brand.

The metric that protects brands in the AI era

Marketing leaders are already discussing share of search, and it is beginning to appear in boardroom conversations. 

It is quickly becoming a central indicator of brand strength. 

In an AI-driven world where traffic is scarce and visibility is fragmented, the strategic imperative is clear.

Brands need to be searched for. Those that are searched for endure. Those that are not fade.

That is why share of search is not just another metric. It is becoming the metric. 

SEOs who embrace it can elevate their role, influence, and strategic value at exactly the moment the industry needs it most.

Your next steps 

The advice for SEOs is simple: Learn share of search.

To get started:

  • Learn more about the metric by reading reports and studies.
  • Create your first share of search report.
  • Analyze the drivers of change, such as market shifts or recent PR or TV campaigns.
  • Experiment with search tools to determine which reporting approach works best.
  • Involve other departments. Host a session on share of search and collaborate with PR teams to track activity.

You will not become fluent in the metric without using it. Once you do, its applications become clear.

Share of search is the bridge that connects SEO to the broader world of brand.

Take the first step.

Sapphire wants AMD to let them “go nuts” on their GPUs

Sapphire wants AMD to let them “go nuts” with their GPU designs

Ed Crisler, Sapphire Technology’s North American PR Manager, has openly stated that he would like GPU manufacturers to give their partners more freedom when building their graphics cards. Sapphire would like to “go nuts” when building graphics cards, but tight rules limit what […]

The post Sapphire wants AMD to let them “go nuts” on their GPUs appeared first on OC3D.

Acer Launches White Edition Predator BiFrost Radeon RX 9070 XT/9070 Cards

Acer introduced its opening salvo of Radeon RX 9070 XT 16 GB and RX 9070 non-XT 16 GB custom card designs back in March, almost alongside competing options produced by other AMD board partners. Oddly, the Taiwanese computer hardware manufacturer did not end up launching Navi 48 GPU-based products during that period—months later, retail stock started appearing in Japan. Fast-forward to the end of 2025, and Acer seems to be catching up with other AIBs, yet again. Last Friday (December 12), a Japanese press release signalled the arrival of White Edition Predator BiFrost options at retail outlets. The flagship spin-off SKU is available immediately, at 138,800 yen (tax included), converting to ~$896 USD.

A weekend investigation—prompted by GDM/hermitage akihabara's PR release—produced evidence of pale Predator BiFrost Radeon RX 9070 XT OC and RX 9070 OC stock reaching Europe. VideoCardz happened upon highly-detailed entries on the Geizhals price aggregation engine. In Germany, Acer's best RDNA 4 gaming card—now dressed in white—is (appropriately) cheapest at the "NBB - notebooksbilliger" Ebay store. An active voucher promo brings total cost of ownership down to €613.81 (~$722 USD). Alternatively, NBB and Nullprozentshop offer the joint-lowest non-XT White Edition listings: €539.00 (~$634 USD). Product renders show familiar packaging; already present on boxed standard black models. Oddly, the non-XT promotional shots show the deployment of three 8-pin power connectors (instead of the standard two). Normally this triple provision is reserved for top-level XT options, so Acer's graphics department could be using a generic image set for both overclocked models.

Marvell Designs Ultra-Low-Power 2 nm Dense SRAM, Outperforming Industry Standard

Marvell recently showcased its custom silicon IP advancements at the Marvell Analyst Day 2025. The company has developed SRAM IP that surpasses industry standards in terms of power efficiency and density. Initially launched in June, their 2 nm SRAM IP now reveals performance figures that highlight its superiority over standard solutions. In a 256K instance comparison, Marvell reports an 80% reduction in total power consumption, a 37% smaller area, and cycle times that are 22% faster. Additionally, Marvell's memory layout is more rectangular, facilitating easier integration into dense SoCs.

Further comparisons with top alternatives show that Marvell's custom SRAM uses 50% less area at the same bandwidth, reduces standby power by 66%, and delivers 17 times more bandwidth per mm² when normalized by area. These improvements are attributed to redesigned clocking and port structures that enhance bandwidth from on-die SRAM. Marvell argues that this architectural approach results in significantly higher bandwidth density and lower power consumption compared to standard dense SRAM IP. In an era where logic scaling continues to outpace memory in modern semiconductor nodes, having custom IP that aggressively boosts SRAM density and reduces power usage is a massive advantage.

(PR) HDMI LA, Inc. to Showcase Advanced HDMI Gaming Technologies at CES 2026

HDMI Licensing Administrator, Inc. (HDMI LA) announced that its CES 2026 booth will spotlight HDMI gaming technologies with a range of demonstrations designed to engage both professional attendees and gaming enthusiasts. The booth will feature the performance benefits of three HDMI cable types across interactive gaming demonstrations:
  • Ultra96 HDMI Cable, introduced in the recently released HDMI 2.2 Specification, will be on display with early prototypes
  • Ultra High Speed HDMI Cable
  • Premium High Speed HDMI Cable
These demonstrations will highlight how the right HDMI connectivity drives exceptional performance for next-generation gaming experiences.

(PR) Resident Evil Requiem's Latest Trailer Focuses on Leon S. Kennedy

A new era of survival horror arrives with Resident Evil Requiem, the latest and most immersive entry yet in the iconic Resident Evil series. Experience terrifying survival horror with FBI analyst Grace Ashcroft, and dive into pulse-pounding action with legendary agent Leon S. Kennedy. Both of their journeys and unique gameplay styles intertwine into a heart-stopping, emotional experience that will chill you to your core. Watch the latest heart-stopping trailer (below) for a taste of the horrors that await on February 27, 2026.

Requiem for the dead. Nightmare for the living. Resident Evil Requiem is now available for pre-order! Experience terrifying survival horror with FBI analyst Grace Ashcroft, and dive into pulse-pounding action with legendary agent Leon S. Kennedy. Both of their journeys and unique gameplay styles intertwine into a heart-stopping, emotional experience that will chill you to your core. A new era of survival horror begins when Resident Evil Requiem launches February 27, 2026 for PlayStation 5, Xbox Series X|S, Nintendo Switch 2, Steam, and Epic Games Store.

HKC Unveils M10 Ultra: The World's First RGB Mini-LED Monitor

At the mid-way point of last week, HKC promo material teased an imminent unveiling of the world's first RGB Mini-LED gaming monitor. Days later, the Chinese semiconductor display specialist delivered on that promise. Their upcoming cutting-edge product boasts "simultaneous light and color control" capabilities, enabled by an RGB Mini-LED backlight system. By comparison, normal (white/blue) Mini-LED only delivers "single light control." So far, only well-heeled customers/business owners have enjoyed this richer color experience, via extremely large format televisions, courtesy of Hisense and Samsung. According to a domestic press release, HKC's "color focused" M10 Ultra is capable of "reshaping light," thanks to the integrated backlight array's 4,788 independent control zones. The company claims that its latest design largely eliminates the halo effect that is a byproduct of Mini-LED backlight tech—enabled through a micron-level "light and color control" matrix and independent R-G-B channel manipulation algorithms.

The M10 Ultra's recent introduction did not extend into a retail release, but a launch—at least in China—could happen within the first quarter of 2026. HKC reckons that their forthcoming 32-inch 4K offering reaches professional-grade standards with a 100% coverage of the BT.2020 color gamut (also sRGB, DCI-P3, and Adobe RGB). According to an initial specification sheet, the manufacturer's latest 31.4-inch panel can support dual-mode functionality—primarily 3840 x 2160 resolution, at a 165 Hz native refresh rate. A competitive gaming-oriented mode boosts the refresh rate up to 330 Hz, with screen resolution reduced to 1080p. The M10 Ultra seems to be up-to-date in terms of high-bandwidth connectivity: DisplayPort 2.1 is present on the spec sheet. Curiously, HKC did not specify whether this model is an IPS or VA-type monitor.

(PR) NVIDIA Debuts Nemotron 3 Family of Open Models

NVIDIA today announced the NVIDIA Nemotron 3 family of open models, data and libraries designed to power transparent, efficient and specialized agentic AI development across industries. The Nemotron 3 models—with Nano, Super and Ultra sizes—introduce a breakthrough hybrid latent mixture-of-experts (MoE) architecture that helps developers build and deploy reliable multi-agent systems at scale.

As organizations shift from single-model chatbots to collaborative multi-agent AI systems, developers face mounting challenges, including communication overhead, context drift and high inference costs. In addition, developers require transparency to trust the models that will automate their complex workflows. Nemotron 3 directly addresses these challenges, delivering the performance and openness customers need to build specialized, agentic AI.

Content, Consolidation, and the Creator’s Cut: Isaacson on AI, Media Mergers, and Musk

“People who create content should be part of the party when the proceeds get divided up,” asserts Walter Isaacson, the esteemed biographer and advisory partner at Perella Weinberg, during a recent appearance on CNBC’s ‘Squawk Box.’ This fundamental principle, he argues, is the linchpin for navigating the burgeoning age of artificial intelligence, particularly as major […]

The post Content, Consolidation, and the Creator’s Cut: Isaacson on AI, Media Mergers, and Musk appeared first on StartupHub.ai.

FreePBX Patches Critical SQLi, File-Upload, and AUTHTYPE Bypass Flaws Enabling RCE

Multiple security vulnerabilities have been disclosed in the open-source private branch exchange (PBX) platform FreePBX, including a critical flaw that could result in an authentication bypass under certain configurations. The shortcomings, discovered by Horizon3.ai and reported to the project maintainers on September 15, 2025, are listed below - CVE-2025-61675 (CVSS score: 8.6) - Numerous

Why click-based attribution shouldn’t anchor executive dashboards

As marketing channels and touchpoints multiply rapidly, the way success is measured significantly impacts long-term growth and executive perception. 

Click-based attribution – across models like last-click, first-click, linear, and time-decay – remains the default. 

But as a standalone measurement strategy, it’s showing its age. 

Click metrics now carry disproportionate weight in executive dashboards, and that reliance introduces real limitations.

Click-based models can still reveal valuable insights into digital engagement. 

However, when the C-suite bases major budget and strategy decisions solely on clicks, they risk overlooking critical aspects of the customer journey – often the very pieces that matter most.

This article examines:

  • What click-based attribution actually captures.
  • Where click-based measurement breaks down in a multi-channel, multi-device, privacy-first world.
  • The business risks of over-indexing on click metrics.
  • Measurement approaches that better align marketing with real business outcomes.
  • How marketing leaders can guide executives toward more holistic, outcome-oriented frameworks.

The goal isn’t to demonize clicks – they still belong in the toolbox. But they should provide context, not serve as the foundation.

What does click-based attribution actually measure?

Click-based attribution tracks ad clicks and assigns conversion credit to the marketing touchpoints that drove them. 

Models like first-click, last-click, linear, time-decay, and data-driven approaches differ only in how they split that credit across the user journey.
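
As a rough illustration of that point, here is a minimal sketch (hypothetical channels, one conversion, heavily simplified rules) of how three of these models divide credit:

```python
def attribute(touchpoints: list[str], model: str) -> dict[str, float]:
    """Split a single conversion's credit across an ordered list of clicked touchpoints."""
    credit = {channel: 0.0 for channel in touchpoints}
    if model == "first_click":
        credit[touchpoints[0]] += 1.0
    elif model == "last_click":
        credit[touchpoints[-1]] += 1.0
    elif model == "linear":
        for channel in touchpoints:
            credit[channel] += 1.0 / len(touchpoints)
    else:
        raise ValueError(f"unknown model: {model}")
    return credit

# Hypothetical journey: paid social -> organic search -> branded search -> conversion
journey = ["paid_social", "organic_search", "branded_search"]
for model in ("first_click", "last_click", "linear"):
    print(model, attribute(journey, model))
```

Note how the last-click rule hands every bit of credit to the branded search at the end of the journey, the same lower-funnel bias examined later in this article.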

Digital ad platforms and many analytics tools default to click-based models because clicks are relatively easy to capture, understand, and report. 

They’re deterministic, clean, and simple to interpret at a glance.

That cleanliness, however, can be misleading. 

Click-based attribution depends entirely on a user interacting with tracking links or tags. 

If a user doesn’t click, or clicks but converts later or elsewhere, the touchpoint may be missed or misattributed.

This approach can work in a simple, linear funnel. 

But as customer journeys become multi-device, multi-channel, and increasingly offline, clicks lose context quickly.

Dig deeper: The end of easy PPC attribution – and what to do next

The problems with solely relying on click-based attribution

Clicks don’t represent real customer behavior

Today’s buyers rarely follow the neat, linear paths that click-based models assume. 

Instead, they move across devices, channels, and even offline touchpoints.

Think social media, LLMs like ChatGPT, and brand exposure from video, influencers, or website content. 

Many of these interactions never generate a tracked click, yet they play a critical role in shaping perception, intent, and eventual conversion.

For example, a buyer may watch a brand’s video on LinkedIn during their morning commute. 

Later, they read a third-party review and skim a few case studies on the brand’s website.

Days later, they type the brand name directly into Google and convert. 

In a click-based model, only the final branded search click receives credit. 

The video, the review, and the content that built trust remain invisible.

These aren’t minor attribution blind spots – they represent a canyon. 

Click-based measurement skews too much toward lower-funnel performance

Click-based models place the most weight on the final click. 

As a result, they often over-index lower-funnel activity from channels like retargeting ads or branded search. 

These channels convert more frequently, but they do not create demand on their own.

For C-level decision-makers, this creates a dangerous bias. 

Dashboards light up for retargeting campaigns and branded search, so budgets flow there.

Mid- and upper-funnel investments – brand building, awareness, content, and influencers – are reduced or cut. 

Over time, the brand’s long-term growth engine is choked in favor of short-term, easily quantifiable wins.

Dig deeper: Marketing attribution models: The pros and cons

Click-based models undervalue creative, messaging, and brand

Not all marketing impact shows up as clicks. 

A video ad or thought-leadership piece may plant a seed without prompting an immediate click, yet the message can linger. 

It may lead to later brand searches or site visits, outcomes that are difficult to capture through click-based measurement.

As a result, brand power, creative messaging, and top-of-funnel reach are underrepresented in click-based models. 

Over time, organizations that optimize solely around click-based attribution may unintentionally deprioritize creativity, brand-building, and long-term equity.

Click-based attribution breaks down in a privacy-first world

We’re moving toward a future where third-party cookies are diminished or gone, privacy rules continue to tighten, and tracking becomes less precise. 

Under these conditions, click tracking grows more difficult, less reliable, and increasingly misaligned.

Without stable identifiers, many of the assumptions behind click-based models – “this click belongs to that user” or “this click led to that conversion” – begin to unravel. 

Attribution becomes a house of cards built on data that may not hold up as privacy and tracking norms continue to shift.

The business risks of over-relying on click-based attribution

Misallocation of budgets

When click-based reporting dominates, budgets tend to flow toward what looks good – the activities that drive visible revenue and deliver clean, direct ROI. 

That often comes at the expense of demand generation efforts that support long-term growth, such as brand campaigns, content, awareness, and other upper-funnel media.

This approach may “work” for a few months or even years. 

Over time, however, the pipeline dries up. 

Awareness declines, organic reach stagnates, and the brand loses its ability to attract new audiences at scale.

Erosion of brand over time

Marketing shifts into a zero-sum exercise focused on extracting conversions from existing demand rather than expanding it. 

Without sustained investment in brand equity and demand generation, competitiveness, brand loyalty, and lifetime value (LTV) suffer.

In essence, optimizing for short-term ROAS puts long-term brand health at risk.

Misaligned incentives across teams

When KPIs are click-based:

  • Media teams optimize for clicks.
  • Creative teams optimize for click-worthy content.
  • Analytics teams optimize for attribution that ties cleanly to conversions. 

The result is marketing silos working toward different objectives.

  • Media buys may undermine creative performance. 
  • Creative teams may chase cheap clicks. 
  • Analytics may mask cannibalization rather than reveal incrementality. 

Fragmentation increases.

Blind trust in platform-reported metrics

Ad platforms and tracking tools report click-based conversions, but many of those conversions are self-crediting, particularly within paid media platforms. 

When you rely heavily on these numbers without scrutiny or connection to the broader user journey, you risk making high-stakes decisions based on biased data.

What to use instead of click-based attribution

If click-based attribution is flawed, how should performance be evaluated? 

The short answer is a combination of approaches grounded in real business outcomes.

Marketing mix modeling (MMM) for channel-level contribution

At a higher level – especially when multiple channels are involved, including online, offline, paid media, organic media, and PR – MMM helps quantify channel-level contribution to sales, revenue, or other business outcomes. 

It looks at broad correlations over time using aggregated data rather than user-level clicks.

MMM, supported by machine learning, improved data resolution, and more frequent refresh cycles, has become more accessible and actionable. 

It isn’t a replacement for click- or site-based data, but a powerful complement. 
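
To make the idea concrete, here is a minimal sketch of the intuition behind MMM: regress an aggregated outcome (weekly revenue) on aggregated channel spend and read the coefficients as rough contribution estimates. The channel names and figures below are invented for illustration, and a production model would add adstock, saturation curves, seasonality, and far more history.

```python
import numpy as np

# Hypothetical aggregated weekly data (all figures in $k, invented for illustration).
# Spend columns: search, social, video.
spend = np.array([
    [20, 10,  5],
    [25, 12,  6],
    [22, 15,  8],
    [30, 18, 10],
    [28, 20, 12],
    [35, 22, 15],
    [32, 25, 14],
    [26, 16,  9],
])
revenue = np.array([188, 207, 209, 242, 241, 272, 266, 224])

# Add an intercept column to capture baseline (non-marketing) revenue.
X = np.column_stack([np.ones(len(spend)), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

baseline, contributions = coef[0], coef[1:]
for channel, c in zip(["search", "social", "video"], contributions):
    print(f"{channel}: ~${c:.2f} revenue per $1 of spend")
print(f"baseline (non-marketing) revenue: ~{baseline:.0f}k per week")
```

The point is the unit of analysis: aggregated spend and outcomes over time, not user-level clicks.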

Dig deeper: MTA vs. MMM: Which marketing attribution model is right for you?

Multi-touch attribution (MTA), used thoughtfully 

User-level path analysis still has a place when privacy and tracking allow. 

Multi-touch models that consider multiple touchpoints can provide richer insight, but they work best as one input among many rather than a single source of truth. 

They offer path visibility, but without incrementality testing or support from MMM, they still risk over-crediting and bias.
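
As an illustration of the kind of logic an MTA model applies, the sketch below distributes conversion credit across a recorded path using a position-based (U-shaped) rule. The 40/20/40 weighting and the sample journey are assumptions chosen for the example, not a recommendation.

```python
def position_based_credit(touchpoints, first=0.4, last=0.4):
    """Assign heavier credit to the first and last touch; split the
    remaining credit evenly across the middle touches."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle = (1.0 - first - last) / (n - 2)
    credit = {}
    for i, touch in enumerate(touchpoints):
        weight = first if i == 0 else last if i == n - 1 else middle
        credit[touch] = credit.get(touch, 0.0) + weight
    return credit

# A hypothetical journey like the one described earlier:
# video view -> third-party review -> case study -> branded search.
path = ["linkedin_video", "review_site", "case_study", "branded_search"]
print(position_based_credit(path))  # roughly 40% / 10% / 10% / 40%
```

Last-click would hand all of the credit to the final branded search; the model choice, not the data, decides the story, which is why MTA output should be cross-checked against incrementality tests and MMM.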

Customer lifecycle metrics: LTV and CAC payback, retention, cohort analysis

Marketing value isn’t confined to a single sale or conversion.

LTV, retention, and long-term value creation matter just as much. 

Tying spend to CAC payback, churn, loyalty, and retention creates a measurement framework aligned with long-term business goals.
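
A quick, hedged sketch of what that looks like in practice: compute CAC payback in months and a naive LTV from monthly margin and churn. Every input below is a placeholder.

```python
def cac_payback_months(cac, monthly_margin_per_customer):
    """Months of gross margin needed to recover the acquisition cost."""
    return cac / monthly_margin_per_customer

def simple_ltv(monthly_margin_per_customer, monthly_churn_rate):
    """Naive lifetime value: margin divided by churn (assumes constant churn)."""
    return monthly_margin_per_customer / monthly_churn_rate

cac = 600      # blended cost to acquire one customer (placeholder)
margin = 50    # gross margin per customer per month (placeholder)
churn = 0.03   # 3% of customers churn each month (placeholder)

ltv = simple_ltv(margin, churn)
print(f"CAC payback: {cac_payback_months(cac, margin):.0f} months")
print(f"LTV: ${ltv:,.0f}  (LTV:CAC ratio: {ltv / cac:.1f}x)")
```

Real models segment by cohort and discount future margin, but even this naive version reframes the question from what a click cost to how long a customer takes to pay back.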

Incrementality testing as a standard practice

Incrementality testing measures what marketing actually adds by identifying net-new conversions, revenue, lift, or awareness. 

It separates what would have happened anyway from what your efforts truly drove.

This approach isn’t as clean as click tracking and requires more planning and discipline, but it delivers causality. 

It allows you to say, with confidence, “This spend generated X% incremental lift.”
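
A minimal sketch of that calculation, assuming a simple exposed-versus-holdout split with invented numbers; a real test also needs randomization and a significance check.

```python
def incremental_lift(test_conversions, test_size, holdout_conversions, holdout_size):
    """Compare conversion rates of an exposed group and a holdout group to
    estimate the conversions attributable to the campaign."""
    test_rate = test_conversions / test_size
    baseline_rate = holdout_conversions / holdout_size
    incremental = (test_rate - baseline_rate) * test_size
    lift_pct = (test_rate - baseline_rate) / baseline_rate * 100
    return incremental, lift_pct

# Hypothetical geo-holdout experiment (figures invented for illustration).
incremental, lift = incremental_lift(
    test_conversions=1_450, test_size=100_000,
    holdout_conversions=1_200, holdout_size=100_000,
)
print(f"~{incremental:.0f} incremental conversions, {lift:.1f}% lift over baseline")
```

In practice you would report a confidence interval alongside the point estimate rather than a single number.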

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact

Attention metrics, quality signals, and creative impact

Not all impact is transactional. 

Upper-funnel signals such as viewability, time-in-view, attention scores, and engagement matter. 

Creative resonance, brand recall, and impact often influence later behavior that never appears as a click.

Looking beyond clicks to metrics like creative recall, brand lift, share of voice, sentiment, and qualitative feedback helps anchor measurement to real brand value and audience expectations.

Building a modern measurement framework

A modern measurement framework isn’t built around one model or metric. 

It brings together complementary methods to create a clearer, more balanced view of performance.

Take a portfolio approach

The most effective measurement frameworks take a portfolio approach. 

MMM, incrementality, multi-touch attribution (when possible), attention metrics, and customer lifecycle metrics work together to triangulate performance from multiple perspectives.

This diversity reduces bias and balances short-term performance with long-term brand health.

It also makes it possible for the C-suite to see more than conversions alone – including impact, growth potential, and sustainable value.

KPIs that reflect real business impact

Executives care about revenue, margin, and growth. Not just clicks. 

Reframe KPIs around the key metrics that matter, such as:

  • Revenue.
  • Cost per acquisition.
  • Customer lifetime value.
  • Retention.
  • Brand lift.
  • Market share.
  • Brand sentiment.

Package those into dashboards that tell a story: 

  • “Here’s what we did, here’s what grew, here’s what we learned, here’s where we go next.”

Build executive dashboards for outcomes, not vanity metrics

When dashboards lead with vanity metrics like click volume, CTR, or raw conversion rate, insight is limited. Lead instead with business outcomes.

Build narrative-driven dashboards that connect investment to results, learning, and action.

Lean toward data storytelling instead of data reporting. 

That story resonates with executives. It links marketing to business value, not just to marketing activity.

Leverage AI, predictive modeling, and forecasting strategically 

Modern analytics tools – including AI and predictive forecasting – can help:

  • Estimate demand.
  • Forecast impact.
  • Model how different investments may play out over time. 

Use them to simulate scenarios, test assumptions, and support business cases.

These tools aren’t silver bullets. They work best as accelerators for sound strategic thinking. 
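
As one hedged example of what "simulate scenarios" can mean, the toy Monte Carlo below projects incremental revenue for a spend plan under uncertain per-channel ROI. Every figure is an assumption; the value lies in comparing plans under the same assumptions, not in the absolute numbers.

```python
import random

def simulate_revenue(spend_plan, roi_mean, roi_sd, runs=10_000):
    """Toy Monte Carlo: project incremental revenue for a spend plan
    under uncertain per-channel ROI (all inputs are assumptions)."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for channel, spend in spend_plan.items():
            roi = random.gauss(roi_mean[channel], roi_sd[channel])
            total += spend * max(roi, 0.0)
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[len(totals) // 10]  # median, p10

plan = {"brand": 200_000, "search": 300_000, "retargeting": 100_000}
roi_mean = {"brand": 1.8, "search": 2.5, "retargeting": 1.2}
roi_sd = {"brand": 0.8, "search": 0.5, "retargeting": 0.3}

median, p10 = simulate_revenue(plan, roi_mean, roi_sd)
print(f"median projected incremental revenue: ${median:,.0f} (p10: ${p10:,.0f})")
```

Swapping in a different plan and re-running gives a like-for-like comparison that is easier to defend than a single point forecast.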

Moving away from click-based thinking

Changing how performance is measured doesn’t happen automatically.

It requires clear framing, evidence, and a deliberate transition rather than an abrupt overhaul.

Understand common objections and address them clearly

Often, executives cling to click-based metrics because they’re easy to understand (“one user clicked, we got a sale”) and seemingly real-time. 

They want fast feedback and accountability. Demand creation efforts often feel abstract and hard to justify.

Be prepared to address that directly:

  • “Clicks are easy to understand.”
    • Yes. But they paint an incomplete picture. Show them what they miss.
  • “We need real-time metrics to manage marketing spend.”
    • That’s valid. But real-time doesn’t always equal real value. Complement real-time reporting with analyses timed to your sales cycle, incremental lift tests, and periodic MMM to ground day-to-day decisions.
  • “Brand/awareness spend is hard to justify.”
    • I hear you. That’s why you start small. Run test campaigns, measure impact via lift studies, attribution-aware conversion, and lifecycle metrics. Show proof-of-concept.

Implement a gradual shift, don’t overhaul overnight

Click-based attribution doesn’t need to be discarded overnight. Instead:

  • Introduce incrementality testing for a small portion of spend to show what budget really contributes.
  • Once incrementality proves meaningful lift, allocate more budget toward long-term demand creation efforts.
  • Run or commission MMM annually (or semi-annually) to quantify channel contribution holistically.
  • Adjust executive dashboards to reflect new KPIs, such as revenue, CAC payback, brand lift, and LTV, and reduce emphasis on mere clicks or last-click conversions.

Over time, incentives begin to shift. Media moves beyond clicks, creative focuses on quality and resonance, and analytics emphasizes causality and long-term value.

Educate the executive team

Executives rarely object to logic – they object to noise. 

Frame your case with clarity and use data. 

Show examples, run tests, show incremental lift, and then build dashboards that tell a clear story.

Once you prove that a dollar invested in brand or top-of-funnel media delivers compounding value over time, leadership hopefully becomes less attracted to short-term click metrics. 

They begin to appreciate marketing as an investment, not a cost center.

Clicks are part of the story, not the whole story

Click-based attribution has served marketing teams for years. It offered a clean way to connect conversions to touchpoints. 

But the landscape has changed. 

  • User journeys are longer and messier. 
  • Privacy constraints are tighter. 
  • Long-term brand value now matters as much as short-term conversions.

For C-level teams, judging performance by clicks alone is like judging a company’s health by heart rate alone. It’s useful, but incomplete.

Modern marketing requires a richer view – one that blends data, causality, business outcomes, and long-term brand building.

As marketing leaders, our job isn’t to chase the next click. 

It’s to build brands that last, drive sustained growth, and help leadership see marketing not as a cost, but as a strategic investment.

How to build an effective content strategy for 2026

Every week, new data highlights both the overlap and the divergence between effective organic search techniques across traditional SEO (Google SERPs) and GEO (ChatGPT, AI Overviews, Perplexity, etc.). 

It’s a lot to absorb. One week, headlines say traditional SEO tactics work fine for ChatGPT.

The next, you’ll see reports that one platform is elevating Reddit while another is dialing it back.

Given how quickly this landscape shifts, I want to break down the approach, process, and resources my team is using to tackle content in 2026. 

This goes far beyond a content calendar. 

It’s about combining audience understanding, the interplay of organic platforms, and your brand’s perspective to build a content system that delivers real value.

The right approach for valuable content

The emphasis on quality and value in content is good for marketers.

The tenets of E-E-A-T remain central to our approach because they apply to AI search discoverability as much as to traditional SEO. 

Producing strong content still depends on a rich understanding of your audience, good fundamental structures, and solid delivery methods – skills that always matter.

Start with your audience. 

  • Who are they? 
  • What do they need? 
  • What content will help them get there? 

Approach content like any other product or service: 

  • Find or understand a need and address it.
  • Know the emotions (i.e., fear, uncertainty, urgency) in play.
  • Show your credentials – authority expressed in part through third-party brand mentions, one of the leading factors in AI search visibility.

That said, content that has performed well in Google may not work as effectively for LLM search. 

Instead of writing primarily for blue-link SERPs, we now focus on creating content that stands on its own as an authoritative, structured data source, with trust and originality as ranking signals. 

That means prioritizing clarity, factual depth, and a consistent brand perspective that AI models can reliably quote.

In an age of mass AI content, original insights, data, and human perspective are key differentiators, so content systems should include a step for “original proof” – data, interviews, or commentary that make the material uniquely trustworthy.

We’re also thinking more about how content gets used in AI experiences, not just how it’s found. 

Summaries, bullet points, and explainers that answer layered intent are increasingly valuable. 

Incorporating schema, structured data, and a consistent brand voice improves how AI systems read and represent your content. 

In short, the goal is to optimize for retrievability and credibility, not just ranking.
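
As a small, hedged illustration of what "incorporating schema" can look like, the snippet below emits Article markup as JSON-LD for embedding in a page. The field values are placeholders, and the right schema.org type depends on your content.

```python
import json

# Placeholder metadata -- replace with your real article details.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to build an effective content strategy for 2026",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2026-01-15",
    "publisher": {"@type": "Organization", "name": "Example Brand"},
    "about": ["content strategy", "AI search", "GEO"],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(article_schema, indent=2))
```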

Building a process to create valuable content

The content strategy path I like to prescribe is as follows:

  • Problem aware: Empathize with your audience by articulating their problem in a clear, differentiated way.
  • Solution aware: Present your audience with objective, detailed, valuable options for solutions to their problem.
  • Brand aware: Develop your brand’s association as a trusted solution provider.
  • Product aware: Position your specific product or service as the ideal solution for the reader’s problem.

Once your research is conducted, you’ll have what you need to craft content and deploy it in multiple ways. 

The linear workflow that persisted for years in traditional SEO, however, must evolve into a modular content engine – one where a single research output fuels multiple media types (articles, YouTube scripts, short-form video, LinkedIn posts, etc.), with platform-native variations all aligned to a central narrative theme.

Resources to use in content development

A few years ago, I would have started with well-known, well-established tools like Ahrefs and Semrush. 

While those remain useful for benchmarking, they no longer represent how people discover or consume information as AI search transforms user behavior in real time. 

AI search abstracts away keywords – users are asking multi-intent questions, and LLMs are generating synthesized answers. 

Rather than being the main starting point, SEO analysis is now one piece of the research pie. 

It’s still important, but search optimization is now embedded throughout the content process.

The tools below have been important in the past, and my team still leans on them as part of a more holistic approach to content planning.

Qualitative interviews

Surveys are useful but can be expensive when you’re trying to reach audiences outside your CRM. 

You can still get strong insights by engaging subject matter experts who share the same professional experiences, challenges, and responsibilities as your target audience. 

Slack communities, live or virtual meet-ups, and memberships in organizations like the AMA or ANA can all offer on-the-ground perspectives that support your content mapping.

Audience analysis from AI systems 

It’s critical to include intent analysis from AI tools and conversational search data. 

Understanding how users phrase questions to AI systems can inform structure and tone.

Social media

Not all social media posts are created equal, but understanding your audience includes knowing where your audience likes to engage: X, Reddit, YouTube, TikTok, etc. (Not to mention that Reddit citations show up prominently in ChatGPT results.)

Utilize these platforms to gather real-time information on what your audience is discussing and to increase brand mentions, which will send strong signals to ChatGPT and similar tools.

Competitor analysis

Shift from tracking keyword overlap to evaluating content depth, originality, and entity coverage – where your brand’s expertise can fill gaps or improve on generic AI-summarized answers.

Adjust the KPIs to assess the impact of your content

For many years, SEO marketers focused on impressions and clicks, although more advanced practitioners also incorporated down-funnel metrics, such as leads, conversions, pipeline impact, and revenue. 

Today, SEOs must expand their KPIs to include brand mentions in:

  • AI summaries.
  • Content-assisted conversions.
  • Cross-channel engagement depth. 

These are the new indicators of helpfulness and value.

Resist the urge to rest on your laurels

We’ve seen strong successes with AI search visibility that complement our traditional SEO results, but our understanding of best practices continues to evolve with each new round of aggregated data on AI search results and shifting user behavior.

In short, keep a parallel track of what has worked recently and where the trends are heading, since ChatGPT and its competitors are changing user behavior in real time – and with it, the shape of organic discovery across platforms.

AMD and Samsung Said to Discuss 2 nm Foundry Deal for Future EPYC CPUs

According to a report from Sedaily, cited by TrendForce, Samsung Electronics' Device Solutions (DS) Division is in discussions with AMD over manufacturing its chips on Samsung Foundry's second-generation 2 nm process known as SF2P. Samsung's 2 nm technology is competing directly with TSMC's N2 and Intel's 18A, both of which also use Gate-All-Around (GAA) transistor architecture. Industry sources say a decision on moving forward with a formal agreement could come as early as January next year, with actual production being more a matter of "when" rather than "if". As part of the talks, Samsung is expected to run AMD designs through a multi-project wafer (MPW) program in the near term. This would allow both companies to evaluate SF2P's performance and yields before committing to volume manufacturing. The chip involved is believed to be AMD's next-generation EPYC "Venice" server CPU, according to Global Economic News. If the MPW results meet AMD's expectations, sources say it could open the door for AMD to adopt a dual-foundry strategy, pairing Samsung with TSMC. Such a move would not be limited to server products and could eventually extend to future consumer CPUs, including the "Olympic Ridge" Ryzen lineup.

Separately, the report highlights the AI-focused partnership between Samsung and AMD. Despite Samsung's challenges in entering NVIDIA's HBM supply chain, it has secured a strong position with AMD. Samsung is already supplying HBM3E 12-layer memory for AMD's MI350 accelerators and is considered well positioned for HBM4, which is expected to debut alongside AMD's next-generation MI450 products. From Samsung's perspective, adding AMD as a foundry customer would further support its recent recovery. Sedaily notes that Samsung Foundry has picked up momentum after winning orders from major clients such as Tesla and Apple. At the same time, industry sources point to TSMC capacity constraints and rising wafer prices, factors that make Samsung an increasingly attractive alternative.

(PR) NVIDIA H100 GPU Cluster on CoreWeave AI Cloud Platform Breaks Graph500 Run Record

The world's top-performing system for graph processing at scale was built on a commercially available cluster. NVIDIA last month announced a record-breaking benchmark result of 410 trillion traversed edges per second (TEPS), ranking No. 1 on the 31st Graph500 breadth-first search (BFS) list. Performed on an accelerated computing cluster hosted in a CoreWeave data center in Dallas, the winning run used 8,192 NVIDIA H100 GPUs to process a graph with 2.2 trillion vertices and 35 trillion edges. This result is more than double the performance of comparable solutions on the list, including those hosted in national labs.

To put this performance in perspective, say every person on Earth has 150 friends. This would represent 1.2 trillion edges in a graph of social relationships. The level of performance recently achieved by NVIDIA and CoreWeave enables searching through every friend relationship on Earth in just about three milliseconds. Speed at that scale is half the story—the real breakthrough is efficiency. A comparable entry in the top 10 runs of the Graph500 list used about 9,000 nodes, while the winning run from NVIDIA used just over 1,000 nodes, delivering 3x better performance per dollar.
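
The arithmetic behind that claim can be checked in a couple of lines, using the figures from the announcement and the illustrative 150-friends assumption:

```python
people = 8.0e9             # rough world population
friends_per_person = 150   # illustrative assumption from the announcement
edges = people * friends_per_person   # ~1.2 trillion friendship edges
teps = 410e12                          # 410 trillion traversed edges per second
print(f"edges: {edges:.1e}, search time: {edges / teps * 1000:.1f} ms")
# edges: 1.2e+12, search time: 2.9 ms
```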

(PR) Attack Shark Unveils V8: A Next-Generation Lightweight Wireless Gaming Mouse

Attack Shark, a gaming peripheral brand specializing in affordable, high-performance mechanical keyboards, gaming mice, and accessories, released the V8, a next-generation lightweight wireless gaming mouse born for eSports. Designed for competitive players seeking faster response, greater stability, and sustained comfort, the V8 introduces breakthrough technology, setting a new benchmark for gaming mice.

Co-designed with the Attack Shark player community, the V8 features a shark-fin wireless receiver with an enhanced antenna for stronger signals and stable transmission. Its unique design ensures low latency even in interference-heavy environments. An LED indicator provides instant visibility of connectivity, polling rate, and battery status, delivering a reliable, uninterrupted wireless experience for competitive play.

AMD's AIB Partner Wants More Design Freedom for Extreme OC GPUs

AMD's AIB partners are speaking up, calling for more design freedom for their GPUs. According to Sapphire, an AMD-exclusive GPU AIB, board partners are limited in their ability to modify cards and must adhere to the chipmaker's official design rules. They now want more freedom to build GPUs focused on overclocking, extremely quiet operation, and other traits that PC enthusiasts appreciate. In a Hardware Unboxed interview with Ed Crisler, PR Manager for North America at Sapphire Technology, the AIB expressed some frustration over these design rule limits.
Sometimes I really wish the chip makers would get out of the way and let us partners just make our cards. Give us the chip. Give us the RAM. Tell us what we have to provide to make it work with the board. And then let us make the cards. Let us have our fun. Let us go nuts. Let there be real differentiation. Sometimes it feels like this market becomes too too much the same.

Samsung Could Stop SATA SSD Production Amid NAND Flash Shortage

Samsung is reportedly considering halting its consumer SATA III SSD production lines due to a growing shortage of NAND flash in the supply chain. According to Moore's Law is Dead, industry insiders have been discussing a potential halt to Samsung's regular SATA III SSD production. Most NAND flash is being allocated to datacenter customers, such as AI labs and hyperscalers, leaving limited supply for the lower-margin consumer market, including PC gamers and tech enthusiasts. However, this likely only impacts regular SATA III SSDs and not Samsung's popular M.2 PCIe NVMe drives, which have been a consumer favorite for years. Additionally, there are reports that Samsung is converting its NAND flash production lines in Pyeongtaek and Hwaseong to focus on DRAM production. The upcoming Pyeongtaek Fab 4 (P4) is also expected to operate as a DRAM-only facility using Samsung's latest 1c process.

Earlier, industry sources suggested that Samsung is becoming cautious about the NAND market while demand for standard DRAM surges, making NAND flash a less attractive segment and causing the entire supply chain to slow down as inventories are depleted. The rapid expansion of AI infrastructure has led to NAND flash shortages that could persist for years. The price of a 1 Terabit TLC NAND chip reportedly rose from $4.80 in July 2025 to $10.70 in November 2025, an increase of more than 100% in less than six months, and other types of NAND flash, such as MLC and QLC, have also seen their spot prices more than double.

Reinforcement Learning Comes Home: NVIDIA and Unsloth Democratize AI Mastery

Reinforcement Learning, once the exclusive domain of supercomputers and multi-million dollar data centers, has decisively stepped into the realm of local computing. This shift, highlighted in a recent tutorial by Matthew Berman, demonstrates how powerful AI models can now be trained on consumer-grade NVIDIA RTX GPUs using open-source tools like Unsloth, fundamentally democratizing access to […]

The post Reinforcement Learning Comes Home: NVIDIA and Unsloth Democratize AI Mastery appeared first on StartupHub.ai.

Nvidia and AI Stocks Poised for Higher Rerating, Says Fundstrat’s Tom Lee

The prevailing sentiment around artificial intelligence, despite its unprecedented surge, often grapples with questions of sustainability and valuation. Yet, Tom Lee, Fundstrat Global Advisors head of research and Fundstrat Capital CIO, posits a distinctly bullish outlook, arguing that leaders in the AI space, notably Nvidia, are not overvalued but rather poised for a significant upward […]

The post Nvidia and AI Stocks Poised for Higher Rerating, Says Fundstrat’s Tom Lee appeared first on StartupHub.ai.

Swarm Intelligence: Decoding the Power and Perils of Multi-Agent AI

The notion that multiple, specialized AI agents can collectively outperform a single, monolithic system represents a significant shift in artificial intelligence development. Anna Gutowska, an AI Engineer at IBM, articulates this concept with clarity, illustrating how “many simple AI agents, each with a small job, coming together to solve big, complex problems.” This paradigm, known […]

The post Swarm Intelligence: Decoding the Power and Perils of Multi-Agent AI appeared first on StartupHub.ai.

Soverli smartphone OS cracks the mobile sovereignty problem

The Soverli smartphone OS enables a fully auditable, isolated operating system to run simultaneously with Android, eliminating the security-usability trade-off.

The post Soverli smartphone OS cracks the mobile sovereignty problem appeared first on StartupHub.ai.

⚡ Weekly Recap: Apple 0-Days, WinRAR Exploit, LastPass Fines, .NET RCE, OAuth Scams & More

If you use a smartphone, browse the web, or unzip files on your computer, you are in the crosshairs this week. Hackers are currently exploiting critical flaws in the daily software we all rely on—and in some cases, they started attacking before a fix was even ready. Below, we list the urgent updates you need to install right now to stop these active threats. ⚡ Threat of the Week Apple and

A Browser Extension Risk Guide After the ShadyPanda Campaign

In early December 2025, security researchers exposed a cybercrime campaign that had quietly hijacked popular Chrome and Edge browser extensions on a massive scale. A threat group dubbed ShadyPanda spent seven years playing the long game, publishing or acquiring harmless extensions, letting them run clean for years to build trust and gain millions of installs, then suddenly flipping them into

Uncontested ads are quietly draining your holiday budget. Here’s how to fight back. by BrandPilot.ai

This season, Google Search and Shopping Ads are expected to surge past $70 billion in holiday spending. But there’s a hidden flaw in the auction system — one most advertisers don’t realize is costing them money even when competitors aren’t in the game.

BrandPilot calls this the Uncontested Google Ads Problem, and it’s becoming one of the most overlooked sources of wasted ad spend in peak retail season.

During SMX Next, John Beresford, Chief Revenue Officer at BrandPilot, unpacked how a little-known behavioral quirk in Google’s auction logic can cause advertisers to overspend on their own brand terms, their Shopping placements, and even their category keywords — simply because Google doesn’t automatically reduce your CPC when competition disappears.

Instead of paying less when you’re the only bidder, you may be paying the same high rate you’d pay when rivals are active… without realizing it.

It’s a phenomenon happening thousands of times a day across major brands, and many marketers never notice it’s occurring.

In his session, Beresford discussed:

  • Why “competition gaps” happen far more often than advertisers think.
  • How uncontested moments distort CPCs, even on brand keywords.
  • What real-time auction visibility makes possible — and why AI is changing the game.

He also shared examples of how advertisers are reclaiming wasted spend and reinvesting it into growth – without sacrificing impression share, traffic, or revenue.

Watch BrandPilot’s session now (for free, no registration required) to learn how to:

  • Pinpoint why your CPCs are being artificially inflated when competitors are absent.
  • Estimate the true financial impact of the Uncontested Ads Problem across your annual budget.
  • Implement AI-driven bidding and suppression strategies that prevent self-bidding and boost ROAS.

If you’re running Google Search or Shopping campaigns this holiday season, you can’t afford to miss this session. Learn how to stop the Google Grinch from stealing your budget — and start turning those savings into real performance gains.

MSI MEG X870E GODLIKE X Edition Starts Selling at $1,300

MSI started selling its flagship Socket AM5 motherboard, the MEG X870E GODLIKE X Edition, at an eye-watering $1,300 in the US and approximately €1,200 in the EU (EUR price includes VAT). The board was announced way back in August and is only now making its way to retail. Various tech publications, including TechPowerUp, posted reviews of the board over the weekend. Our review can be read here. This board is the absolute pinnacle of the AMD platform, providing connectivity and overclocking capabilities that very few other boards offer. Its closest competitors are the ROG Crosshair X870E Extreme and the GIGABYTE X870E AORUS Extreme AI Top. Its truckload of connectivity options includes 40 Gbps USB4; thirteen 10 Gbps USB 3.2 Gen 2 Type-A and Type-C ports; 10 GbE and 5 GbE wired LAN; Wi-Fi 7; and a very expensive onboard audio solution. Beyond the board itself, MSI includes an AIC-based M.2 riser and a fan management module, alongside at least half a dozen onboard M.2 slots and a powerful VRM solution ready for current and future processors.

Be sure to check out the TechPowerUp review of the MSI MEG X870E GODLIKE X Edition motherboard.

(PR) Palit Announces New Borderlands 4 Themed Mods and Giveaway

Palit today announced the Palit Maker Borderlands 4 themed mod giveaway. The three custom Borderlands 4 GeForce RTX 50 GPU mods were realized using the creative freedom of Palit Maker support. Comment under the Instagram post of the mod you like the most and get the chance to win amazing prizes! Palit Maker is our original concept for gamers seeking a unique build. It allows you to create a personalized graphics card without affecting the warranty. Download the 3D files from our website and start creating today.

Phantom Stealer Spread by ISO Phishing Emails Hitting Russian Finance Sector

Cybersecurity researchers have disclosed details of an active phishing campaign that's targeting a wide range of sectors in Russia with phishing emails that deliver Phantom Stealer via malicious ISO optical disc images. The activity, codenamed Operation MoneyMount-ISO by Seqrite Labs, has primarily singled out finance and accounting entities, with those in the procurement, legal, payroll

Yono Rewards – Get rewarded for scrolling less


Yono rewards you to scroll less. Stay under 1 hour on Instagram and TikTok per day -> earn 5 points -> stack points -> redeem them at local spots you actually want to visit. 30 points = free specialty coffee, or two Guinness pints, or dessert. The less you doomscroll, the more you earn. Businesses get free foot traffic—they set their offers, pay nothing, and 76% of users buy extra items and return without promotions. You turn phone addiction into real money: scroll less, earn rewards, support local spots, and afford to go out again.

View startup

VolkLocker Ransomware Exposed by Hard-Coded Master Key Allowing Free Decryption

The pro-Russian hacktivist group known as CyberVolk (aka GLORIAMIST) has resurfaced with a new ransomware-as-a-service (RaaS) offering called VolkLocker that suffers from implementation lapses in test artifacts, allowing users to decrypt files without paying an extortion fee. According to SentinelOne, VolkLocker (aka CyberVolk 2.x) emerged in August 2025 and is capable of targeting both Windows

CodersNote – Your AI mentor for personalized programming learning


CodersNote is an AI-powered learning platform that personalizes how students and professionals learn programming. It creates customized roadmaps, courses, and projects based on each learner’s goals—whether it’s frontend development, AI, data science, or any other tech career path.

The prebuilt roadmaps and 3,000+ free learning resources available on the platform are carefully chosen and designed by 14 industry experts, ensuring learners follow the most relevant and up-to-date path in tech. It also provides real-time guidance, interview preparation, and 24/7 AI support, helping learners build practical skills, stay motivated, and achieve faster career growth in technology.

View startup

Nvidia plots return to FP64 with next-gen HPC chips

Nvidia plans FP64 comeback following Blackwell’s disappointing HPC results While Nvidia Blackwell series GPUs have proven highly popular for AI workloads, the supercomputing community is disappointed by their lack of FP64 (double-precision) performance. In fact, Nvidia’s new B300 “Blackwell Ultra” chip practically ignores FP64 with its anaemic 1.2 teraflops of performance. For context, Nvidia’s A100 […]

The post Nvidia plots return to FP64 with next-gen HPC chips appeared first on OC3D.

MSI launches its MEG X870E GODLIKE X Edition motherboard

MSI celebrates 10 years of GODLIKE performance with its MEG X870E GODLIKE X Edition motherboard MSI has just launched their new MEG X870E GODLIKE X Edition motherboard, a new AM5 flagship motherboard that’s limited to 1000 units. This motherboard is both a high-end motherboard and a collector’s item, boasting high-end specifications and unique numbering. We’ve […]

The post MSI launches its MEG X870E GODLIKE X Edition motherboard appeared first on OC3D.

GigLegal – For everything freelance; Contracts, escrow, and our Get Paid Guarantee


GigLegal is dedicated to empowering New York City's independent workforce. We aim to provide affordable, accessible, and easy-to-understand legal tools, ensuring every freelancer can operate with the confidence and security of a large corporation. We believe legal protection should be a standard, accessible tool, not an expensive barrier.

The biggest challenges in freelancing are non-payment and scope creep. GigLegal is designed to solve these problems at their root. We combine legally-sound contracts with a secure escrow service, and create a 'trust protocol' that aligns expectations for both freelancers and their clients from day one.

View startup

This bundle is a smart way to dodge rising RAM prices and provides nearly everything you need for an entire build for $300 – 32GB of DDR4 memory, a B550 motherboard, a 360mm AIO, and a case

If you’re willing to use an older AM4 platform, this Newegg deal pairs a B550 motherboard with 32GB of DDR4 RAM, a full-size liquid cooler, and a case, offering a strong foundation for a Ryzen 5000 gaming PC at a reduced cost.

The Future of Code: From Syntax to AI-Guided Vibe Engineering

The advent of large language models is fundamentally reshaping the very act of software creation, moving developers from the meticulous crafting of syntax to a more abstract, intent-driven collaboration with artificial intelligence. This profound shift was the central thesis of Kitze, founder of Sizzy, in his recent discourse on the evolution from “vibe coding” to […]

The post The Future of Code: From Syntax to AI-Guided Vibe Engineering appeared first on StartupHub.ai.

Microsoft Copilot AI Comes to LG TVs, and Can't Be Deleted

Microsoft's Copilot AI chatbot is arguably one of the most controversial add-ons ever implemented in the Windows 11 operating system. However, the controversy doesn't stop at PC operating systems. It seems to extend to TVs as well. According to Reddit user u/defjam16, his LG TV's webOS received an update that installed Microsoft's Copilot AI app, with no option to remove it. Although users can choose to ignore it, the push for increased AI integration in everyday products is becoming unavoidable, even on TVs. What exactly can a Copilot AI app do on your TV? We don't know either.

Microsoft is likely promoting its Copilot on TVs to capture more of the AI app market, aiming to become the go-to platform for AI inquiries. Since webOS, the TV operating system LG uses, is Linux-based, it is also possible that Microsoft is preparing the Copilot AI app for a wider rollout to Linux users, who now officially command a 3% market share among PCs. Other TV operating system platforms are also at "risk" of getting a dedicated Microsoft Copilot AI app, which is especially bad for people who don't want their TV to do any AI processing.

Unreleased RTX Titan Ada prototype gets taken apart to reveal complex internal design and assembly — Nvidia's mythical GPU is engineered to the max with dual 12VHPWR connectors

Following prior benchmarks, the man himself, der8auer, has disassembled the RTX Titan Ada prototype he's had for months. Inside, the card is a maze of wires, connections, and side-plate wizardry, holding together a beastly quad-slot design that had not one but two 12VHPWR (600W) connectors and a side-mounted PCB.

The Generative AI Threat is Already in Your Browser: Malicious Chrome Extensions Explode in Latest Cyber Scourge

The rush to integrate Generative AI into daily workflows has opened a dangerous new front in the cyber security war, one that’s hiding in plain sight: the humble browser extension. New research from Palo Alto Networks security experts, Shresta Seetharam, Mohamed Nabeel, and William Melicher, reveals a disturbing trend of malicious GenAI-themed Chrome extensions being […]

The post The Generative AI Threat is Already in Your Browser: Malicious Chrome Extensions Explode in Latest Cyber Scourge appeared first on StartupHub.ai.

Thevenin – The power of an enterprise grade platform without the overhead


Thevenin is an Internal Development Platform (IDP) designed for organizations that need to build with startup speed while maintaining rigorous enterprise standards regarding governance, security, and data sovereignty.

Developers can easily create Docker containers and attach files, variables, and even volumes. Organizations can see what has been changed and who changed it through version control, and can limit cloud resource usage by environment.

View startup

Decipher – Upload, Optimize, Sell


Upload 100s of product images. Get SEO titles, descriptions & tags automatically. Sync directly to your WooCommerce or Shopify store. Go from photos to profitable listings in minutes.

Create Topical Authority via our maps feature, which plans and visualizes your content in a way never seen before. Get Keyword Research on the map directly with our competitor analysis tool, revolutionizing the way keyword research is done. No more second-guessing which keywords are actually being used and how.

View startup

Dev hacks Xiaomi's Smart Humidifier to free it from the cloud, now works with Home Assistant locally — custom firmware allows the product to evade planned obsolescence

If the one thing about smart home appliances that deterred you from ever investing in this connected future was how your data was always being routed through servers, you're in for a treat. A skilled developer has hacked his new Xiaomi Humidifier with ESPHome, making the device compatible with Home Assistant.
