Your car with Google built-in is about to get smarter, thanks to Gemini
Thanks to deep integrations with both your vehicle and your apps, Gemini in cars with Google built-in will help drivers do more safely while still focusing on the road.
Sundar Pichai is on the cover of Time magazine for its 100 most influential companies issue for 2026. 
Google is filling a key measurement gap between awareness and consideration, giving advertisers a clearer view of how their brand is actually perceived — not just remembered.
What’s new. Google Ads has introduced a new “Association” metric within Brand Lift Studies. Advertisers can define a concept, category, or attribute, and Google will ask users a survey-style question about which brands they associate with that specific idea.
How it works. Instead of measuring simple recall, the metric evaluates whether audiences connect your brand to a desired positioning. That could mean “premium,” “sustainable,” or even a product category — offering a more nuanced read on brand perception.

Why we care. Google is giving you a way to measure brand positioning, not just awareness or recall. The new Association metric helps determine whether campaigns are actually shaping how consumers perceive a brand — a critical step between being known and being chosen. It also enables more strategic optimization of creative and messaging, especially for brands trying to own specific attributes or categories.
Between the lines. Brand Lift has traditionally focused on awareness, recall and consideration. Association sits in between, helping advertisers understand whether their messaging is shaping how people think about the brand, not just whether they recognize it.
The catch. There’s still a constraint: advertisers can only select three Brand Lift metrics per study, so adding Association means making trade-offs with existing KPIs.
The bottom line. Association gives advertisers a more strategic lens on brand building — measuring not just visibility, but whether campaigns are landing the intended message.
First seen. This update was first spotted by Google Ads expert Thomas Eccel, who shared it on LinkedIn.

Reddit is quickly becoming a powerful platform shaping how people discover and perceive brands. As AI search engines increasingly surface Reddit threads and comments, these conversations now influence visibility.
To understand this shift, I analyzed 117 SaaS brands on Reddit. People reveal what they really think there, which doesn’t always match polished marketing.
As communities shape brand perception, Reddit is no longer optional.
Here’s my analysis, plus how you can use Reddit to your advantage.
My analysis of 117 brands across the SaaS industry started with identifying the verticals to address:
From there, I created a Google Sheet with the brand names for each vertical. Then, I mapped out the following details for each brand:
Across all 117 brands, I analyzed over 300 Reddit threads, including brand mentions, sentiment, community engagement, and brand participation.
Let’s dive into the key findings.
One thing became clear early on: people respond to people, not corporate brands.
Brands run by moderators who were helpful, honest, and non-promotional were received more favorably than those using a polished, corporate tone. Redditors tended to ignore or downvote obvious marketing copy.
In general, redditors don’t want to be marketed to. They want real opinions and real experiences.
As a result, peer recommendations felt more credible than brand messaging. When redditors asked questions or shared frustrations, the most authentic answers came from other users.
When brands stepped in with scripted or promotional responses, they often struggled to gain traction.
However, when brands answered directly, acknowledged limitations, and used conversational language, responses improved. In some cases, brand moderators even earned upvotes and thanks.
Redditors talk about brands, whether or not they’re present on the platform. In many cases, brands simply aren’t there.
Thirty of the 117 brands I analyzed have no Reddit presence. Another 23 are on Reddit, but their subreddits are abandoned.
In several instances, users asked direct questions like:
They received responses from other redditors sharing experiences, opinions, recommendations, and problems.
When brands aren’t there, the conversation continues without them. Over time, their reputation on Reddit exists outside the brand’s control.
Other negative outcomes can follow. When brands aren’t present, others can take their place.
In one instance, I found a community using a popular brand name that had nothing to do with the brand. This shows how easily brand presence can be shaped or misrepresented.

Redditors are already discussing your brand. The only question is whether you’re part of that conversation.
Reddit is an incredible source of unfiltered customer insights.
If you want to know what drives people away, what people value, and how people compare tools, you’ll find the answers on Reddit.
Here are some ways Reddit helps with customer research.
On Reddit, you’ll find people asking questions and sharing:
Reddit users tend to say exactly what they think. This kind of honesty is hard to find anywhere else.
These insights are critical for improving SaaS products. Traditional feedback methods don’t always capture these comments — but Reddit does.
Your Reddit community is a good place for happy customers to advocate for your brand. For example, this Reddit post by Monday shares a brand ambassador program.

In the comments, some brand advocates share insights into their experience, helping elevate the post.

When discussing some community-led brands, redditors often highlight solutions to problems and help fill brand gaps. For example, I noticed users helped each other with troubleshooting, sharing fixes, and recommending integrations.
In some cases, these communities were almost fully self-sustaining, requiring little brand involvement.
Across the topics I reviewed, redditors often expressed negative sentiment about pricing and suggested alternatives, especially for enterprise SaaS tools.
As a result, SaaS brands are often associated with soaring costs and limited pricing transparency, which can hurt perception. When users highlight competitor features, they surface gaps and alternative tools to consider.
Reddit attracts people who discuss how they use software. In my analysis, I observed that users shared:
These posts and comments give brands insight into real use cases they can use to improve products.
Reddit is no longer a side conversation. It’s where brand perception is shaped in real time.
Across the 117 brands I analyzed, conversations are happening on Reddit — even when the brand isn’t present. Increasingly, those conversations feed into AI search, influencing what people see, trust, and choose.
Smart brands shouldn’t ignore Reddit. They should track mentions, listen closely, show up where it matters, and treat Reddit as both a reputation channel and a product insight engine.
Why does it take Noctua so long to release Chromax Black fans? Noctua is preparing to release its Chromax Black NF-A12x25 G2 fans, which will arrive around 10 months after the release of its standard brown/beige versions. Ahead of this release, Noctua has decided to give its fans a glimpse behind the curtain and explain […]
The post Noctua explains – Why do their Chromax Black fans take so long? appeared first on OC3D.


Another Intel Wildcat Lake CPU arrives on PassMark, showing equivalent performance to its smaller sibling. PassMark Reveals Intel Core 5 330 Delivers 4,215 Points in Single and 14,947 Points in Multi-Core Tests Some of the Intel Wildcat Lake CPUs have now appeared on popular benchmarking platforms like PassMark and Geekbench. We first saw a glimpse of the only 1+4 Core CPU, Core 3 304, on Geekbench, and then the Core 5 320 appeared a few days ago on the popular platform PassMark. We saw the Core 5 320 competing with the Apple A19 Pro in MT, but trailing in single-threaded […]
Read full article at https://wccftech.com/intel-core-5-330-spotted-on-passmark-for-the-first-time/

NVIDIA has revealed the games it plans to add to the GeForce NOW cloud streaming library for the month of May 2026, but what's arguably more significant this month is how NVIDIA is expanding the list of games that are classed as RTX 5080-ready. Beginning today, "across nearly the entire GeForce NOW Ready-to-Play library," players subscribed to GeForce NOW Ultimate can play their games with the power of an RTX 5080 behind them. It's a massive expansion of the list of RTX 5080-ready games, which had previously expanded in trickles of a select few games getting added to GeForce NOW […]
Read full article at https://wccftech.com/nvidia-geforce-now-16-games-may-2026-nearly-entire-ready-to-play-library-rtx-5080-ready/

People find new ways of building PCs, but this one caught our attention as fitting regular-sized components in a CRT chassis is challenging. Redditor Builds a Whole PC Inside a CRT Monitor Using Desktop Parts; Replaces Display With a Laptop Panel and Deploys Several Case Fans Decades-old CRT display just became a fully functional computer, thanks to u/Discipline_Great, who, even though he couldn't revive the old CRT monitor, had some other plans. He says he picked it up from an e-waste, and it was already broken. So, he decided to turn it into a PC building project, which appears challenging, […]
Read full article at https://wccftech.com/modder-turns-crt-monitor-into-a-full-pc-build/

Benchmarks of Intel's upcoming and fastest gaming handheld SoC, the Arc G3 Extreme, have been leaked, surpassing the Ryzen Z2 by 25%. Intel Packs Its Strongest Battlemage GPU, & 14 CPU Cores Inside the Arc G3 Extreme Gaming Handheld SoC We recently covered Intel's first Arc G3 gaming handheld, which has been listed by online retailers. While the retailer listing was void of details for the SoC itself, we now have more specs and even benchmarks of the upcoming chip & they look phenomenal. Starting with the CPU, the Intel Arc G3 Extreme is going to be the top offering […]
Read full article at https://wccftech.com/intels-arc-g3-extreme-handheld-chip-crushes-ryzen-z2-extreme-benchmark-leak/

Yesterday, after reports that it would be shutting down, Greedfall and Steelrising makers Spiders, a studio that had survived in the video game industry for nearly two decades, officially closed its doors after its parent company Nacon failed to find a buyer after Spiders had filed for insolvency. While Nacon itself and three more of its subsidiaries have all filed for insolvency, Nacon continues to truck onward and reveals its Nacon Connect event will indeed return in May 2026, as it previously promised when the event was postponed earlier this year. The showcase event will premiere on May 7, 2026, […]
Read full article at https://wccftech.com/nacon-connect-set-may-2026-amidst-studio-closure-concerns/

Spaiky is like Duolingo for AI learning: it teaches AI literacy through bite-size, gamified lessons on your phone. Explore modules like Mastering AI Prompts and How LLMs Work, and learn in five minutes a day with quizzes, analogies, and no jargon. Earn XP, keep streaks, and collect trophies while tracking progress across 80+ interactive lessons. The app is free to start and is available now on Android, with iPhone coming soon.
The Spool List is a peer-to-peer fabrication marketplace that connects buyers with verified makers in 3D printing, laser cutting, CNC, embroidery, metalwork, and more. Post a job, get fixed-price quotes or open bids, and pay securely through escrow. Track progress in real time, message your maker, and release funds only when satisfied. The platform supports card and USDC payments, has zero buyer fees, and includes dispute resolution, making it easy to source anything from a single prototype to small-batch production.
In celebration of Route 66’s 100th anniversary, Google Maps is rolling out two new ways to help you explore it, virtually or IRL.
Preferred Sources is now rolling out globally in all supported languages, giving users more control over the news they see on Search.
Built for the next generation of Search, AI Max for Shopping campaigns helps retailers reach shoppers the moment discovery begins.
Upgrade to AI Max to simplify your workflow and access our most advanced tools.
As AI Max turns 1, we’re helping you capture even more opportunities in the expanding Search universe. 
Google’s Preferred Sources now supports all languages, not just English. “Preferred Sources is now rolling out globally in all supported languages,” Google wrote on its blog this morning.
“This feature gives you more control over the news you see on Search by letting you choose the outlets and sites you want to appear more often in Top Stories,” Google added.
In December, Google rolled out Preferred Sources globally, but only in English. Now it supports all languages globally as well.
Stats. Google added some interesting data including:
Preferred Sources. Preferred Sources let searchers star publications in the Top Stories section of Google Search, and Google uses that signal to show more stories from those starred outlets. The feature entered beta in June, rolled out in the U.S. and India in August, and is now expanding globally.
How it works. You click the star icon to the right of the Top Stories header in search results. After that, you can choose your preferred sources – assuming the site is publishing fresh content.
Google will then start to show you more of the latest updates from your selected sites in Top Stories “when they have new articles or posts that are relevant to your search,” Google added.
More details can be found over here.
Why we care. Traffic from Google Search is hard to earn, so if you can get your loyal readers to make your site a preferred source, that can help. Google said those users are twice as likely to click, which can drive more traffic.
So add the preferred source icon to your site and encourage users to sign up. You can make Search Engine Land a preferred source by clicking here.

The difference between a 2% margin and a 20% margin increasingly comes down to whether you’re renting attention or owning the answer.
For years, search rewarded the ability to buy visibility. That model is weakening.
As AI systems increasingly resolve queries without a click, the value shifts from traffic acquisition to answer formation.
When you move from buying clicks to engineering answers (i.e., structuring content so it can be surfaced, cited, and trusted by AI systems), you change what you own. Instead of renting placement, you build answer equity: durable inclusion in the outputs that shape decisions.
The goal isn’t to turn off paid search. It’s to stop relying on it as your primary source of demand. Over time, this can lower acquisition costs and reduce volatility, because you’re not competing for every impression.
To operationalize this shift, you need a content structure that maximizes what AI systems can extract. Think of it as an “atomic sandwich.”
An atomic sandwich content structure shifts the focus from chasing traffic to maximizing intent density. Here’s how:
Most organizations treat their search budget like a high-interest payday loan.
You keep pouring cash into the paid bucket for that immediate hit of traffic, and it feels like you’re winning.
But the moment you stop feeding the meter, your brand disappears.
For many organizations, this isn’t just marketing inefficiency — it’s an organizational risk.
In the emerging Answer Economy, your rented audience is evaporating. Data from Seer Interactive (Sept 2025) shows paid CTR on informational queries has dropped 68% when Google’s AI Overviews are present.
You’re not just paying for clicks. In many cases, your paid traffic contributes to awareness that AI systems can later satisfy without requiring a click.
The “box” has changed.
Here’s the structural leak in your balance sheet: to survive 2026, you must stop buying a crowd and start engineering the answer.
If your brand isn’t among the trusted sources behind the machine’s answer, your visibility — and influence — shrinks significantly.
We’ve moved from a search engine that directs users to a generative engine that validates information. Every dollar you spend on ads to cover a lack of E-E-A-T is money you’re burning.
The data is clear: appearing in search results is no longer a viable model on its own.
The goal is no longer just to rank in search, but to be consistently included among the sources AI systems rely on.
Without trust, you’re paying for ghost impressions.
In the old box, you could survive by being loud. In the new box, you survive by being certain.
Most companies are in organizational denial.
You see the cost of rented clicks rising and quality falling, but you’re too afraid to stop because you’ve neglected your information architecture and have no foundation. That’s a balance sheet liability.
Use this checklist in your next review to find where your Answer Equity is leaking.
Stop rewarding word count. Every piece of content must deliver a “meat” layer — information gain a retriever can’t synthesize from the rest of the web. That’s how you reclaim your margins.
Dig deeper: Information gain in SEO: What it is and why it matters.
Stop treating schema as a technical extra. It’s your trust score on the digital exchange. Ensure your authors have strong provenance so AI retrievers can instantly crawl and confirm your expertise.
Dig deeper: Decoding Google’s E-E-A-T: A comprehensive guide to quality assessment signals.
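One concrete way to give authors machine-readable provenance is Article markup with a linked Person node. A minimal sketch; every name and URL below is a placeholder, and the exact properties you need depend on your CMS and content type:

```python
import json

# Hypothetical Article markup: the "author" node carries "sameAs" links
# to external profiles that corroborate the author's expertise, which is
# what lets retrievers crawl and confirm provenance.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How real-time dashboards cut reporting lag",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Head of Analytics",
        "sameAs": [
            "https://www.linkedin.com/in/janedoe",
            "https://github.com/janedoe",
        ],
    },
}

print(json.dumps(article, indent=2))
```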
If your traffic drops but lead quality holds, you’re winning. Focus on users who bypass the summary because they need the deep, forensic expertise only you provide.
Dig deeper: Measuring zero-click search: Visibility-first SEO for AI results.
The shift from renting an audience to owning the answer is the most significant strategic pivot your organization will make this decade. It moves you from a marketing expense to a balance sheet asset.
The paid trap offers a temporary high but leads to a fiscal dead end. Every dollar spent there is consumable — used once and gone when the auction ends.
When you move that capital into your information infrastructure, you stop paying for the privilege of being ignored. You start building a digital entity that owns its facts, earns trust, and controls its future in the Answer Economy.
Your first step: don’t boil the ocean.
Take your top-performing paid landing page and run the seven-point health check. If it’s a “zombie fact” environment, engineer information gain back into the page.
Stop asking for a ranking report; start asking for an entity audit.
The 2026 organization isn’t defined by how much it spends to rent an audience, but by how much it proves it owns the answer.
You have the blueprints. You have the data. Now stop funding the payday loan and start building answer equity.

Across 90 prompts we tested in ChatGPT, commercial prompts triggered web searches 78.3% of the time. Informational prompts did so just 3.1%.
That gap changes what you should write if you want to appear in a ChatGPT answer.
ChatGPT doesn’t pull every response from the same place. Some answers come from training data; others use live web search — a behavior called query fan-out. The model expands your prompt into multiple background searches, then retrieves and synthesizes across those subtopics. If your page isn’t on those branches, it won’t be pulled in.
So the question is no longer just how to rank. It’s which pages open the fan-out door in the first place.
In our sample, informational pages didn’t. Read on to discover where the system went instead.
We tested 90 prompts across three industries: beauty, legaltech/regtech, and IT. We analyzed prompt intent, downstream query expansion, and the intent those expansions reflected.
Here’s the breakdown and the core finding: most fan-out queries aligned with commercial intent, not purely informational prompts.
Query fan-outs change the content game because the system isn’t limited to the literal prompt.
It expands the request into multiple background searches, then retrieves and synthesizes across those subtopics.
Fan-outs trigger parallel web searches tied to the initial prompt, creating opportunities for retrieval, mention, and link citation.
Multi-query expansion is a core design pattern in modern generative search systems. Google describes AI Mode this way: it breaks a question into subtopics, searches them in parallel across multiple sources, then combines the results into a single response.
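The multi-query pattern described above can be sketched in a few lines. This is a conceptual illustration only, not a description of any system's internal API: the expansion map and search function are stubs I've invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def expand(prompt):
    # Stub expansion: a real system generates subqueries with a model.
    # These evaluative reformulations mirror the commercial branches
    # observed in the experiment (comparison, pricing, shortlists).
    return [
        f"{prompt} comparison",
        f"{prompt} pricing",
        f"best {prompt} for small teams",
    ]

def search(query):
    # Stub retrieval: a real system would hit a live search index here.
    return [f"result for: {query}"]

def fan_out(prompt):
    subqueries = expand(prompt)
    # Subqueries are searched in parallel, then the results are merged.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(search, subqueries))
    # Synthesis step: here just flattening; a real system summarizes
    # across sources into a single response.
    return [r for results in result_lists for r in results]

print(fan_out("project management software"))
```

The practical point for content strategy: your page only gets retrieved if it matches one of those expanded branches, not necessarily the literal prompt.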
That raises a strategic SEO question: should you invest more in top-of-funnel educational content, or in lower-funnel comparison, shortlist, and recommendation content?
This experiment framed that problem.
The objective was to test, across selected industries, where fan-out appears by intent category: informational, commercial, transactional, or branded.
The initial hypothesis was direct: informational prompts wouldn’t trigger fan-out, while commercial prompts would, and those fan-outs would stay at the same funnel level or move lower.
We found that ChatGPT-generated fan-outs are overwhelmingly associated with commercial intent.
Disclaimer: This experiment measures observed prompt expansion behavior in ChatGPT. Google AI Mode is cited only as context to show multi-query expansion as a broader pattern in generative search, not as proof of ChatGPT’s internal architecture.
The core sample includes 90 numbered prompts, heavily weighted toward informational intent.
| Prompt intent | Prompts | Share of sample | Prompts with fan-out | Fan-out rate |
| --- | --- | --- | --- | --- |
| Informational | 65 | 72.2% | 2 | 3.1% |
| Commercial | 23 | 25.6% | 18 | 78.3% |
| Branded | 1 | 1.1% | 0 | 0.0% |
| Transactional | 1 | 1.1% | 0 | 0.0% |
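The rates in the table follow directly from the raw counts; a quick sketch of the arithmetic:

```python
# Fan-out rate per intent category: (prompts tested, prompts that
# triggered fan-out), reproducing the percentages reported above.
counts = {
    "Informational": (65, 2),
    "Commercial":    (23, 18),
    "Branded":       (1, 0),
    "Transactional": (1, 0),
}

for intent, (total, fanned) in counts.items():
    rate = 100 * fanned / total
    print(f"{intent}: {fanned}/{total} = {rate:.1f}%")  # e.g. Commercial: 18/23 = 78.3%
```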
The sample skews heavily toward informational prompts, with some commercial ones and minimal branded and transactional queries.
We structured the experiment around the sectors in the brief: beauty/personal care, legaltech/regtech, and IT/tech.
The main finding is clear.
Out of 90 prompts, 20 triggered fan-out. Of those, 18 were commercial and 2 informational.
Informational prompts made up about 10% of fan-out triggers (2 of 20). When they did trigger expansion, they were rewritten into more evaluative, solution-seeking subqueries.
In other words, 90% of fan-out-triggering prompts in the core sample came from commercial intent.
The contrast is stronger than the raw totals suggest. Commercial prompts triggered fan-out 78.3% of the time; informational prompts did so just 3.1%.
This supports the working hypothesis: in this sample, fan-out was overwhelmingly a commercial phenomenon.
Those 20 prompts produced 42 fan-out queries — an average of 2.1 per triggered prompt.
Of those 42 fan-out queries:
Even when a prompt triggered expansion, the system usually shifted toward comparison, product evaluation, feature filtering, shortlist creation, or brand-specific exploration — not broad educational discovery.
The experiment used 90 prompts across three industries, mostly informational, with a smaller set of commercial prompts and minimal branded and transactional queries.
In the analysis, we have:
The analysis then followed three steps:
That produced two distinct but complementary views:
That distinction matters: the first shows which prompts open the fan-out path, while the second shows where the system goes once it opens.
The cleanest interpretation is that, in this sample, fan-outs behave less like open-ended topic expansion and more like assisted decision support.
Commercial prompts almost always opened the door.
Once they did, fan-outs usually stayed commercial.
The system expanded into comparisons, feature-based filtering, product lists, pricing-adjacent queries, and brand-specific evaluations.
A few examples make that concrete.
The two informational exceptions are even more revealing than the rule.
So, even when the prompt starts broad, fan-out often translates that breadth into a lower-funnel retrieval path.
The takeaway isn’t to stop writing informational content.
It’s this: informational content alone is unlikely to align consistently with fan-out expansion, at least in this dataset.
If your goal is visibility in AI answers tied to product selection, vendor discovery, or option narrowing, you need stronger coverage of pages and passages that match those downstream commercial branches.
That may include:
In practical terms, your content model shouldn’t be just ToFU or BoFU, but ToFU with commercial bridges.
A broad article can still help, but it should include passages the system can easily reformulate into decision-support subqueries.
A purely educational piece that explains a category without naming products, tradeoffs, features, use cases, pricing logic, or selection criteria is much less likely to align with the fan-out paths seen here.
Put simply: Don’t just answer the obvious question — anticipate the next evaluative step the system is likely to generate in the background.
This result is directional, not universal.
The next version of this experiment should isolate the question more aggressively and expand the dataset.
A follow-up should map triggered fan-outs back to specific content formats.
The goal isn’t just to confirm that commercial intent wins. It’s to identify which page templates and passage structures best cover the fan-out branches AI systems prefer.

I keep hearing people say AI understands their brand. It doesn’t. Let’s get that out of the way first.
What it does is pattern-match at scale. It compresses your positioning, product, proof, and tone into a bundle of signals it can retrieve and remix at speed.
Those patterns come from two places:
So “AI SEO” isn’t a new channel. It’s a new representation problem: which version of your brand gets encoded, retrieved, and repeated.
Most brands are already in the game. They’re just not playing with purpose.
Classic SEO was a library problem. You published a URL. Google indexed it. A human searched and found it.
AI search is a conversation that stretches out the demand curve. Head terms still drive the majority of visibility, but, ever so slowly, more volume is moving into context-heavy prompts.
Your job is to be the most relevant match inside a model’s memory and retrieval pipeline.
Not by being ranked. But by being represented.
AI doesn’t run on opinions. It runs on associations.
Classic SEO competed for keywords. Then it shifted to entities. AI systems go one layer deeper. They turn entities into vectors.
Your brand becomes a coordinate in dimensional space. Close to some concepts. Distant from others. Pulled by whatever your content and mentions repeatedly associate you with.
If your brand is consistently associated with “enterprise analytics”, “real-time dashboards” and “data governance”, your vector lives near those clusters.
If your messaging sprawls into adjacent territory because someone got bored of writing about the same things, the vector spreads. Precision drops. The model still has a position for you. It’s just fuzzier, less confident, and easier to swap for a competitor with cleaner signals.
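That "closeness" is measurable with cosine similarity. Here's a minimal sketch with toy three-dimensional vectors (real embedding models use hundreds of dimensions, and the axis labels are my own illustrative assumption):

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy axes: association strength with "enterprise analytics",
# "real-time dashboards", and "data governance".
focused_brand   = [0.9, 0.8, 0.9]   # consistent messaging, tight cluster
drifting_brand  = [0.9, 0.1, 0.3]   # messaging sprawled into other topics
category_center = [1.0, 1.0, 1.0]   # the concept cluster itself

print(cosine(focused_brand, category_center))   # high: precise position
print(cosine(drifting_brand, category_center))  # lower: fuzzier position
```

The focused brand scores closer to the category cluster than the drifting one, which is the "fuzzier, easier to swap" effect in vector terms.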
Before you “fix AI SEO,” identify which layer your brand is failing on. The same tactics don’t work everywhere.
Your historical footprint. Press, blogs, documentation, reviews, every old thread on a forum you forgot existed.
You can’t fully control it.
But you can reduce fragmentation by finding and editing all possible past mentions (social profiles, directory listings, wikis, etc.) to create a consistent identity across the internet.
Understand the training layer by asking an AI chatbot to describe your brand with web search turned off.
Your live surface area. Indexed pages, product feeds, APIs. This is where traditional technical SEO of crawling, indexing and rendering matter most. It defines what the AI system can access for citations.
Understand the retrieval layer by running branded-intent and category-intent prompts daily with an LLM tracker and reviewing which sources are consistently cited.
This is the output seen in AI Overviews, AI Mode, ChatGPT, or wherever your brand gets reassembled in front of an actual customer. Your brand will be written into the answer only if it has to be.
So ask yourself, what unique, quotable, additive content forces the LLM to mention you?
Understand the generation layer by using the same LLM tracker data, but reviewing brand mentions within responses and their semantic associations.
Think of these as the forces quietly shaping your representation across the layers.
AI systems merge different references to the same brand if it’s obvious they belong together.
Most brands don’t have one clear identity. They often have:
Humans merge that automatically. Models don’t. They consolidate by pattern, not intent. Every inconsistent self-reference is a vote for fragmentation.
Allow your brand to be written five different ways, and you split your visibility signals five ways.
Models learn what appears together:
Repeat the right pairings, and the association strengthens. Be inconsistent, and it weakens. It’s genuinely that simple.
Models track who is being described, by whom, in what context.
Your own site is one layer. Third-party mentions are another. High-trust sources carry more weight.
Not because of “authority” in the classic SEO sense, but because they appear frequently inside reliable contexts in the training data and retrieval corpora. Similar outcome. Different mechanisms.
When generating answers, AI systems decide which information to use. That decision depends on clarity, relevance, uniqueness, and ease of extraction.
If key facts are buried in narrative copy, implied through metaphor, scattered across sections, the model will simply pull from somewhere else.
On the other hand, if you repeat them, structure them, and make them explicit, you are more likely to be chosen by the model.
In your content, on-page and off-page, make the core entities unmissable. Your brand. Your products. Your categories. Your audience. Your differentiators.
Craft a clear, consistent, canonical positioning that the machine can’t misread by creating a canonical brand bio:
[Brand] is a [market category] for [audience] who need [use case], differentiated by [proof].
Then, honestly ask yourself if your answer could also describe your competition. Or better, ask AI that question. If the answer is yes, rewrite it until it’s unmistakably you.
Then roll out that positioning everywhere. On-page with “retrieval-ready” chunks, in structured data, in “sameAs” references, industry publications, partner sites, user reviews, community discussions, social posts.
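Those “sameAs” references are typically expressed as Organization markup in JSON-LD. A minimal sketch with a hypothetical brand (all names and URLs are placeholders, and the description line is where the canonical bio from above would go):

```python
import json

# Hypothetical Organization markup: one canonical identity, with
# "sameAs" consolidating the brand's profiles across the web so models
# merge them instead of fragmenting the entity.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleBrand",
    "url": "https://www.example.com",
    "description": ("ExampleBrand is an enterprise analytics platform "
                    "for data teams who need real-time dashboards."),
    "sameAs": [
        "https://www.linkedin.com/company/examplebrand",
        "https://x.com/examplebrand",
        "https://github.com/examplebrand",
    ],
}

print(json.dumps(org, indent=2))
```

Embedding this once per page (in a `script type="application/ld+json"` tag) gives every surface the same unambiguous identity to consolidate on.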
Repeat key associations deliberately across pages until it feels excessive. Reduce unnecessary variation in terminology. Then the associations strengthen. Are reinforced. Compound.
Beware brand drift, where inconsistencies allow misrepresentations, and a lack of information allows hallucination to creep in. Police all the edges. Consolidate or kill the pages that introduce conflicting descriptions of your brand.
This is not about gaming AI. It is about reducing entropy.
If that sounds boring, good. The brands that win the AI era are not going to win it with cleverness. They are going to win it with discipline.
Because if answers are inconsistent across sources, your brand won’t be cleanly encoded. And the version of you that AI systems are quietly passing along to customers won’t be the one you intended.
If AI systems can’t confidently represent your brand, they will default to a safer option. Usually, it’s a competitor with cleaner signals. Not because that competitor is “better”. Because that competitor is easier for the machine to use.
AI doesn’t need to understand your brand perfectly. It needs to approximate it well enough to recommend you. Your job is to control that approximation through consistency, structure, and distribution.
Not by publishing more. By making your brand impossible to misunderstand.

Google is doubling down on AI-driven ads just as search behavior shifts toward conversational queries, giving advertisers more automation while trying to preserve control.
What’s new.
AI Max expands beyond Search: Now rolling out to Shopping campaigns and travel-specific formats, broadening reach across more advertiser types.
AI Brief (powered by Gemini): A new interface that lets advertisers steer AI using natural language inputs.
Text disclaimers + URL automation: Compliance-friendly updates to pair with automated landing page selection.
Why we care. Google is making AI Max a core layer across Search, Shopping and Travel, meaning automation will increasingly determine how ads are matched to user intent. This update expands reach into more conversational, high-intent queries that traditional keyword strategies miss, helping brands capture demand earlier in the journey.
At the same time, tools like AI Brief and new compliance features give advertisers more control over messaging and targeting, reducing the risk of fully automated campaigns feeling like a “black box.”
Shopping gets smarter. For retailers, AI Max for Shopping uses Merchant Center data to generate more adaptive ads that can respond to long-tail and exploratory queries, helping brands appear earlier in the discovery phase rather than only at the point of purchase. The rollout is positioned as a simple upgrade for existing Shopping campaigns, suggesting Google wants rapid adoption.
Travel gets consolidated. Search Campaigns for Travel bring previously fragmented formats into a single interface with unified reporting and integrated AI Max capabilities. The move reduces operational complexity while reinforcing Google’s push toward centralized, AI-driven campaign management.
More control with AI Brief. The most notable addition is AI Brief, which attempts to solve a long-standing advertiser concern: lack of compliance control in automated systems. Advertisers can define messaging rules, specify which queries to prioritize or avoid, and shape how different audiences are addressed. The system then generates previews, allowing feedback before campaigns go live.
Automation meets compliance. Google is refining how traffic is directed to websites. Final URL expansion uses AI to select the most relevant landing page for each query, and the new text disclaimer feature ensures required legal messaging remains intact even when automation is active. This signals a push to make AI usable in more regulated industries without sacrificing compliance.
The bottom line. AI Max is evolving from a Search add-on into a foundational layer across Google Ads, combining automation, cross-format reach and advertiser input to adapt to a more AI-driven, conversational search landscape.

LG delivers ultra-sharpness with its 5K Hyper Mini LED 27GM950B monitor
LG has officially released its new UltraGear Evo AI GM9 (27GM950B) 5K Hyper Mini LED monitor. This new 27-inch gaming screen boasts a 5K resolution with a maximum refresh rate of 165Hz. Furthermore, with Dual Mode, this screen also supports 1440p at up to […]
The post LG launches its first UltraGear Evo Hyper Mini LED 5K Gaming Monitor appeared first on OC3D.
Aleyda Solis's Similarweb analysis of 10 markets shows that AI search clicks frequently redirect to local domains, with the distribution differing by industry.
The post AI Search Clicks Often Go To Local Domains: Report appeared first on Search Engine Journal.
Google adds AI Brief and text disclaimers to AI Max. See how new controls help regulated advertisers adopt automation while maintaining compliance and messaging accuracy.
The post New: AI Brief And Text Disclaimers Come To Google AI Max appeared first on Search Engine Journal.
Google expands AI Max to Shopping and Travel campaigns. Learn what’s changing, how it works, and what advertisers should prepare for ahead of broader rollout.
The post Google Launches AI Max For Shopping and Travel Campaigns appeared first on Search Engine Journal.
Microsoft says Bing reached 1B monthly active users, as search ad revenue grew 12% and Edge gained share for the 20th straight quarter.
The post Microsoft Says Bing Reached 1B Monthly Active Users appeared first on Search Engine Journal.

After a slip-up by Fuse Games spoiled the surprise a few days early, we officially learned today that Star Wars Galactic Racer will arrive on PC, PS5, and Xbox Series X/S on October 6, 2026. We also got a full breakdown of the different editions of the game that'll be available, and something that didn't leak, which was the full release date trailer with some incredible-looking fast-paced gameplay. After Fuse Games quelled any fears that Star Wars Galactic Racer wouldn't include pod racers, today's official release date trailer is sure to put the iconic vehicles front and center, with several […]
Read full article at https://wccftech.com/star-wars-galactic-racer-editions-release-date-fuse-games-official-reveal-gameplay/

MSI's upcoming Claw 8 EX AI+ Gaming handheld, which will be powered by the Intel Arc G3 Extreme SoC, has been listed online.
MSI's Next-Gen Claw Handheld Spotted At Italian Retailer: Features Intel Arc G3 Extreme SoC, 32 GB Memory & 1 TB Storage
Intel will soon be introducing its next-generation handheld SoCs called Arc G3. These will come in two flavors: a standard G3 and a high-end G3 Extreme. The Arc G3 series is Intel's big entry into the gaming handheld market through dedicated SoCs, similar to what AMD does with its Ryzen Z series SoCs. Now, the first handheld […]
Read full article at https://wccftech.com/msi-claw-8-ex-ai-gaming-handheld-with-intel-arc-g3-extreme-listed-online-1599-euros/

The new flagship motherboard sports an all-white design, incredible VRM, powerful connectivity, and a robust feature-set for high-end and flagship Ryzen CPUs such as the Ryzen 9 9950X3D2 Dual Edition.
ASRock Introduces An All-White X870E Taichi White Motherboard, Featuring 27 Power Phase VRM, PCIe Gen 5.0 Support, and Modern Connectivity
Popular hardware manufacturer ASRock has debuted a new flagship motherboard for high-end Ryzen CPUs called the X870E Taichi White. The X870E Taichi White is the first all-white flagship motherboard in the Taichi series, bringing a new color scheme to the lineup. The motherboard uses a fully white PCB, white heatsinks, and white […]
Read full article at https://wccftech.com/asrock-launches-x870e-taichi-white/

Atomfall, a very British take on games like Fallout or STALKER from Rebellion that landed on PC and consoles in March 2025, is the latest recent release to get its own adaptation. Two Brothers Pictures, the production company behind hit shows like Fleabag and The Assassin, will lead a TV show adaptation of the game, which also just took home the BAFTA for Best British Game two weeks ago. News of the adaptation comes via a report from Deadline, which adds that the Two Brothers Pictures founders, Harry and Jack Williams, will also be creative leads for the adaptation […]
Read full article at https://wccftech.com/atomfall-tv-show-adaptation-fleabag-producers-lead/

We've known since the game's announcement that The Blood of Dawnwalker would not be quite as large as The Witcher III: Wild Hunt. This makes perfect sense, as Rebel Wolves is smaller than CD Projekt RED was when it made its masterpiece. However, what exactly is the scope that players can expect in the full game? As reported by Gamereactor, Creative Director Mateusz Tomaszkiewicz revealed that The Blood of Dawnwalker turned out to be even bigger than the studio had originally planned. The goal was to target a 40-hour average playthrough, but now that all the content is in place […]
Read full article at https://wccftech.com/blood-of-dawnwalker-playtime-40-70-hours/

PubQ is a scheduling tool built specifically for Substack Notes. Substack has no native scheduler for Notes, only for long-form posts. PubQ fills that gap: a Chrome extension captures your Substack session and posts your Notes automatically at your chosen time.
The workflow is simple: write your Notes in batches, set a schedule, and PubQ handles posting at peak times while you do other things. A free tier is available with no card required.
PRDFlow connects to your GitHub repo and auto-translates every code merge into stakeholder-ready updates, requiring no engineer effort. Founders see business impact like "Payment processing is live, start charging customers," PMs see roadmap progress such as "3 of 5 acceptance criteria met," and sales sees what's demo-ready like "SSO is live, safe to promise clients." No more status meetings, Slack archaeology, or interrupting engineers in deep work. PRDFlow replaces the broken game of telephone between engineering and the rest of the company with a single source of truth pulled directly from the code.
A new agreement between Google and OG&E to protect energy affordability in Oklahoma. 
AI may not see your brand the way you think it does, according to Scott Stouffer, co-founder and CTO at Market Brew.
Brands still publish content, optimize pages, build authority, and follow SEO best practices. But that may not be enough anymore.
Search has moved away from a simple battle over keywords, links, and page-level signals. It’s now shaped by meaning, intent, embeddings, and retrieval, Stouffer said during his SEO Week presentation.
In legacy SEO, a page could rank lower and still exist in the search results. In AI-driven systems, the first question isn’t whether you rank. It’s whether you’re ever retrieved.
“If you’re not retrieved, you do not exist to AI,” Stouffer said.
Your brand already exists inside AI systems as a mathematical object. You may call yourself one thing. Your homepage may say another. Your brand guidelines may promise a clear position. But AI systems build their own view of your brand from the content you have published.
That computed version of your brand may be different from the one you intended to build.
AI visibility begins before ranking, Stouffer said.
In traditional SEO, marketers focus on positions — first, third, or tenth. But AI systems apply a filter earlier. Before anything is ranked, the system determines which content is eligible for consideration.
That is retrieval.
When a user asks a question, the system pulls a limited set of passages or chunks that best match the query. Those passages define the answer space.
If your content isn’t included, you get no impressions, no clicks, and no visibility at all, Stouffer said.
The real shift is moving from exclusion to inclusion.
“You don’t lose. You just never entered the game,” Stouffer said.
AI systems don’t treat a webpage as one clean unit, Stouffer said. They don’t evaluate pages as whole objects or prioritize layout, structure, or formatting.
Content is broken apart. A page becomes chunks: passages, sections, and individual ideas.
Each chunk is evaluated independently. A paragraph deep in a guide can compete on its own. A single sentence can be selected if it aligns closely with the query.
This shifts competition from page versus page to passage versus passage.
Most of a page may never be considered. Only the most aligned chunks are evaluated.
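The chunking step described above can be sketched simply: split a page into passage-level pieces so each one competes on its own. A toy illustration in Python; the blank-line splitting rule and character budget are assumptions for demonstration, since real systems use more elaborate tokenizer-based chunkers:

```python
def chunk_page(page_text: str, max_chars: int = 500) -> list[str]:
    """Split a page into passage-level chunks on blank lines,
    merging short paragraphs until a size budget is reached."""
    paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)   # budget exceeded: close this chunk
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

page = "Intro paragraph about the brand.\n\nDeep-dive section...\n\nFAQ answer."
print(chunk_page(page, max_chars=60))
```

Each returned string is what gets embedded and evaluated independently; the page as a whole never competes as one unit.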
Each chunk is converted into a vector, Stouffer explained.
This vector represents meaning as a position in a high-dimensional space. It captures context and intent rather than exact wording.
Two pieces of content can use different words but sit close together if they express the same idea. Others can share keywords, but sit far apart if they represent different meanings.
“It’s comparing meaning, not wording, measuring distance, not keyword overlap,” Stouffer said.
Relevance is determined by proximity. The closer a chunk is to a query in this space, the more likely it is to be retrieved.
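Proximity in that space is typically measured with cosine similarity. A minimal sketch with toy 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions, and the numbers and labels below are made up for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: near 1.0 means the same
    direction (same meaning), near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: two chunks phrased differently but about the same idea,
# and one chunk that shares no meaning with them.
chunk_a = [0.9, 0.1, 0.2]   # "affordable hiking backpacks"
chunk_b = [0.8, 0.2, 0.1]   # "budget-friendly trail packs"
chunk_c = [0.1, 0.9, 0.8]   # unrelated topic

print(cosine_similarity(chunk_a, chunk_b))  # high: close in meaning
print(cosine_similarity(chunk_a, chunk_c))  # low: far apart
```

This is the "measuring distance, not keyword overlap" idea: the first two chunks share no words, yet score as near neighbors.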
As chunks are mapped into this space, they group together.
Content with similar meaning forms clusters, even across different pages. These clusters reflect how AI systems understand topics.
This understanding comes from how content naturally groups by meaning, not by site structure or labels, Stouffer said.
If content is consistent, clusters become dense and clear. If content is scattered, clusters become fragmented.
What matters is not what a brand intends to say, but what its content actually communicates.
Within these clusters, there is a center point — the centroid, Stouffer said.
The centroid represents the average position of all related content. It reflects the site’s core meaning.
Every page and paragraph influences that position. Consistent content creates a clear, stable centroid. Inconsistent content dilutes it.
That centroid is how AI understands your brand.
Not your homepage. Not your messaging. Not your brand guidelines.
Your centroid is the combined signal of everything you have published, Stouffer said.
“Your centroid doesn’t care about intent. It reflects the math of everything you’ve ever published,” Stouffer said.
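In the terms Stouffer uses, the centroid is just the element-wise mean of all the content vectors, and "drift" can be read as a page's distance from that mean. A toy sketch; the vectors and page labels are invented for illustration:

```python
import math

def centroid(vectors: list[list[float]]) -> list[float]:
    """Element-wise mean of a set of embedding vectors --
    the site's 'average meaning'."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def drift(vector: list[float], center: list[float]) -> float:
    """Euclidean distance from the centroid: how far a page pulls
    away from the site's core meaning."""
    return math.sqrt(sum((x - c) ** 2 for x, c in zip(vector, center)))

# Toy site: three on-topic pages and one stray, off-topic page.
pages = [
    [0.9, 0.1],    # core product page
    [0.8, 0.2],    # pricing page
    [0.85, 0.15],  # case study
    [0.1, 0.9],    # stray blog post on an unrelated topic
]

center = centroid(pages)
for page in pages:
    print(round(drift(page, center), 3))  # the stray page drifts furthest
```

Note how the off-topic page both sits far from the centroid and, by being averaged in, pulls the centroid away from the on-topic cluster.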
This changes how content should be evaluated.
The key question isn’t whether a page is optimized in isolation. It’s whether it aligns with the rest of the site.
Each page either strengthens the centroid or pulls it in a different direction.
“Optimization without alignment creates drift, and drift is what breaks consistency,” Stouffer said.
As drift increases, the site becomes harder for AI systems to interpret and retrieve.
“You don’t write pages, you project meaning,” Stouffer said.
When a query is entered, the system converts it into a vector, Stouffer said.
It then searches for the closest matches in meaning space.
This includes both individual chunks and the centroids that represent broader content clusters.
If your content is close enough, it enters the candidate set. If it is too far away, it is excluded.
Only after this stage do traditional ranking signals apply.
Content quality, links, and structure matter — but only if the content is first retrieved.
If not, those signals are never evaluated, he said.
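The retrieval gate in those steps can be sketched as a similarity threshold: rank chunks by closeness to the query vector and keep only the candidates that clear a cutoff; everything else never reaches ranking. The vectors, chunk labels, and the 0.5 threshold below are illustrative assumptions:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, chunks, threshold=0.5, k=3):
    """Return up to k chunks whose similarity to the query clears the
    threshold. Chunks below the cutoff are never ranked at all."""
    scored = [(cosine(query_vec, vec), text) for text, vec in chunks]
    scored.sort(reverse=True)
    return [(round(s, 3), t) for s, t in scored[:k] if s >= threshold]

chunks = [
    ("our backpack sizing guide", [0.9, 0.1]),
    ("company holiday party recap", [0.1, 0.9]),
    ("best packs for long trails", [0.8, 0.3]),
]
query = [0.85, 0.2]  # e.g. "which hiking backpack should I buy?"

print(retrieve(query, chunks))  # the off-topic chunk is excluded
```

Only the surviving candidates would then be scored by traditional ranking signals; the excluded chunk gets no impressions at all, which is the "you never entered the game" failure mode.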
Many brands follow similar strategies, use the same sources, and produce similar content.
As a result, their centroids converge in the same region, Stouffer said.
He described this as cluster collision.
When multiple brands occupy the same space, AI systems don’t select all of them. They choose a few and ignore the rest.
“They’re not failing best practices. They’re colliding with everyone else using them,” Stouffer said.
Producing more content or improving existing content isn’t enough. If content remains similar in meaning, it remains in the same space.
“You need a distinct centroid,” Stouffer said.
A clear, separate position in meaning space reduces competition and increases the likelihood of retrieval.
This is not a one-time adjustment.
Every piece of content shifts the centroid.
That requires an ongoing process of measurement and adjustment, Stouffer said.
Teams need to monitor alignment continuously and correct drift as it occurs.
Over time, this creates a more stable system where new content reinforces the existing structure.
Most teams can’t see how their content exists in this system.
They can’t see clusters, centroids, or distances — or why content is excluded.
So they rely on trial and error, Stouffer said.
They publish, optimize, and wait for results. When nothing changes, they try something else.
Without visibility into the system, they react to outcomes rather than understanding causes.
Your brand already exists as a mathematical object inside AI systems, Stouffer said.
You do not get to choose that.
You only choose whether to measure and control it or let it drift.
AI does not see your brand the way you describe it. It sees the aggregate meaning of your content.
“If you control your centroid, you control your visibility,” Stouffer said.

For more than two decades (nearly as long as I’ve been in SEO), backlinks have been core to SEO. Google’s PageRank changed search by using backlinks as a proxy for trust.
A link wasn’t just a pathway; it was a vote. The more votes you had and the more authoritative the voters were, the higher you ranked.
But as Google and AI systems matured, entity-based understanding emerged. AI models became better at understanding content, context, and credibility without always needing a hyperlink as a crutch.
Today, visibility isn’t driven solely by links. It’s strengthened by the broader signals your brand has earned: how often it’s mentioned, cited, and trusted across authoritative sources.
Search engines and AI platforms now prioritize these signals.
Modern AI systems can evaluate trust and expertise in ways that were impossible a decade ago, assessing authority through signals that were once approximated mainly by backlinks.
AI can now recognize unlinked signals of authority. A brand mention in a reputable publication—even without a link—reinforces entity authority. Consistent expert citations validate expertise. These signals can’t be faked.
The result is a new era where links still matter, but they’re no longer the only star. Authority is now a network of signals.
As Google relies less on raw link signals, something else has increased: entities — the people, brands, organizations, and concepts behind the content. Google increasingly showcases brands based on who they are and how they’re discussed across the web, alongside their backlink profile.
At its core, entity-first SEO means Google and LLMs are mapping relationships: identifying brands, understanding what they’re known for, and evaluating how they’re referenced in trusted sources.
For example, an outdoor gear company with a modest backlink profile began appearing in AI Overviews for “best hiking backpacks” after repeated mentions in Reddit threads, YouTube reviews, and a few expert roundups. Only some mentions included links, but the brand appeared consistently in trusted, topic-relevant conversations. Google interpreted those unlinked mentions as proof of real-world relevance.
If your brand consistently appears in a positive light in topic-related conversations, AI sees that as proof you’re relevant and trusted. The brands that win now have the strongest entity presence.
PR-style links and editorial coverage are earned mentions in reputable publications — the kind that signal real-world authority, not algorithmic manipulation.
Old-school, volume-based link building is less effective as AI improves at detecting manufactured patterns. But high-quality, relevance-driven link building—especially when paired with PR signals—is more valuable than ever.
Editorial PR links from journalists, analysts, and industry voices who choose to reference a brand because it’s newsworthy or authoritative reflect genuine credibility. They’re the digital equivalent of a trusted expert saying, “This brand matters.”
| Authority-Based Link Building | Volume-Based Link Building |
| --- | --- |
| Strong editorial context | Thin or generic content |
| High topical relevance | Limited relevance |
| Natural language anchors | Over‑optimized anchors |
| Trusted authors and publications | Sites with weak editorial oversight |
| Clear entity associations | Obvious link‑selling footprints |
AI doesn’t just look at the presence of a link; it evaluates the context around it. Models are trained to reward authenticity. Search aims to reward the most authoritative entities.
The real power comes from a combination of signals. As search has evolved, quality has become more powerful than quantity.
Now AI is driving another shift. You can grow traditional, relevance-focused links alongside new brand signals.
A single earned placement, done well, can generate several signals at once: a link, a brand mention, a citation, and contextual relevance.
This is multi-signal authority — holistic credibility that AI systems are designed to reward. It tells Google and LLMs: you’re known, trusted, and relevant. You need to be part of the conversation.
As powerful as PR signals are, they’re only one part of a larger authority ecosystem. AI evaluates brands through a multi-signal trust profile that determines visibility.
Authority is now defined by the breadth and consistency of signals that validate who your brand is across the web. It’s evaluated as humans do: reputation, recognition, expertise, and prominence.
Authority is no longer a single metric tied to links. It’s a network of signals: links, brand mentions, citations, and contextual relevance across trusted sources.
Together, these signals create a holistic authority profile that AI can interpret. The brands that win have the strongest multi-signal authority footprint.
Brand strength quietly outweighs other signals. The data shows it: brands in the top 25% for web mentions average 169 AI Overview citations, while the next quartile averages just 14.
That’s not a small gap.
This aligns with Ahrefs’ analysis of ~75,000 brands. The strongest correlations with appearing in AI Overviews were branded web mentions, branded anchors, and branded search volume—all signals of real-world brand presence.
Consider two competing fitness apps. One has thousands of backlinks from generic listicles. The other is frequently mentioned in Reddit threads, YouTube reviews, and TikTok “day in the life” videos. The second app appears consistently in AI Overviews because AI sees it as part of the real-world fitness conversation, not just the link graph.
The brands dominating AI Overviews have the strongest brand presence, supported by consistent links, mentions, citations, and contextual relevance.
By 2027, link building will undergo radical change. The shift from a numbers game to a confidence game will become the norm, and Share of Authority or Voice will be the new metric.
Here are my top three predictions for what’s next.
Link building will expand to include “seeding” information in AI training hubs. Instead of mass outreach to low-tier blogs, strategies will target user-preferred sources like Reddit, LinkedIn, Substack, and GitHub, which LLMs use for high-quality, human-led data.
Brands that appear most often in training data, trusted sources, and high-authority conversations will earn visibility. This is the next step in a world where signals determine authority.
| Traditional Metric | Predicted Metric | Why the Change |
| --- | --- | --- |
| Backlink Count | Entity Citation Frequency | AI values brand mentions as much as links |
| Domain Authority (DA) | Source Reliability Score | Focus on the trustworthiness of the source |
| Anchor Text | Semantic Context | AI reads the intent around the link, not just the text |
| PageRank | Share of Model (SoM) | Success is being the AI’s preferred answer |
As AI systems rely more on multi-signal authority, proprietary data becomes one of the most powerful assets a brand can produce. Data isn’t just content — it’s a signal engine that naturally earns the signals AI trusts most: editorial citations, expert references, and coverage from authoritative publications.
Traditional link building still provides foundational authority, but data-driven assets are the accelerant. They create high-trust, high-context signals that AI models weigh heavily.
On a platform where visibility depends on how often your brand appears in authoritative contexts, proprietary data is the most scalable way to increase your Share of Authority.
Traditional contextual links will continue to build the foundation. But beyond that, search engines will track every time your brand appears alongside specific topics. Links will need “semantic context.”
Every mention of your brand in news, podcasts, reviews, forums, social posts, and roundups becomes a signal that strengthens your entity.
The future of off-page SEO isn’t a battle between traditional link building and AI-driven signals. It’s the realization that links were always just one signal. Now search engines can understand dozens more.
Traditional link building still matters. It provides the foundational authority, crawl paths, and topical relevance every site needs.
AI has widened the field. It can read context, interpret sentiment, understand entities, and evaluate brand presence.
These signals don’t replace links — they amplify them.
Links built the foundation.
Signals build the skyscraper.


Ask any paid search manager who has tried to get an AI agent to do something genuinely useful with a Google Ads account and you will hear a version of the same story. They exported performance data, pasted it into a chat window, got a solid answer, and then did the exact same thing the next day.
Exporting, pasting, repeating — that isn’t automation. That’s the same manual work you were doing before, performed in a different window.
The AI tools are not the problem. Any of the major ones can do solid analysis when the right data is in front of them.
The problem is getting that data to them live, current, and without a human in the middle copying it across. It’s the reason most PPC accounts in 2026 still run almost exactly the way they did before anyone started talking about agents. Call it the data wall.
Every ad platform is a silo by default. Google Ads records a conversion. Your CRM records whether that lead is qualified. Your inventory system records whether the product behind that click is still on the shelf. None of them talk to each other without deliberate plumbing.
PPC managers have bridged that gap manually for years: weekly exports, cross-referenced spreadsheets, dashboards that were stale by Monday morning.
That was workable when a human was doing the bridging on a set schedule. It becomes a structural problem the moment you hand execution over to an agent that must act in real time.
Take a keyword showing healthy volume, an acceptable CPA, and a CVR in range — all according to Google Ads. In HubSpot, those same conversions are tagged as disqualified leads: wrong territory, no budget, wrong company size entirely. The agent has no way to know. It keeps bidding. The budget keeps spending. And the problem doesn’t surface until someone runs the monthly review.
That is a data access problem, not a prompting problem. Better prompts don’t fix it. But a better pipeline does.
The Model Context Protocol (MCP) is an open standard that lets AI clients connect to external tools and data sources without a custom integration for each one. Before MCP, getting an agent to read from Google Ads, your CRM, and an inventory system meant building and maintaining three separate connectors, with the burden compounding every time you added a source.
MCP standardizes the handshake. A platform publishes an MCP server once, and any compatible AI client — Claude, ChatGPT’s agent mode, your team’s custom agent — can connect to it.
Google has already open-sourced its Ads API MCP server on GitHub, which allows agents to run Google Ads Query Language (GAQL) queries directly against live account data. The infrastructure problem that has blocked most real-world agentic PPC work is finally being addressed at the platform level.
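For readers unfamiliar with GAQL: it is a SQL-like query language, and the kind of query an agent might issue through such a server looks like the sketch below. The query shape follows Google's published GAQL grammar, but treat the exact field selection as illustrative and verify field names against the current Ads API reference:

```python
# A hedged example of the kind of GAQL an agent could issue via the
# Ads API MCP server: last-30-days performance for enabled campaigns,
# most expensive first. Field names follow the public GAQL reference.
gaql = """
SELECT
  campaign.name,
  metrics.clicks,
  metrics.conversions,
  metrics.cost_micros
FROM campaign
WHERE segments.date DURING LAST_30_DAYS
  AND campaign.status = 'ENABLED'
ORDER BY metrics.cost_micros DESC
LIMIT 25
"""
print(gaql.strip())
```

The MCP server's job is simply to execute strings like this against live account data and return the rows to the agent.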
The CRM gap closes first. An agent connected to both Google Ads and HubSpot can pull last month’s conversions, cross-reference them against CRM disposition, identify the keywords producing disqualified leads, and lower bids on those sources — on a schedule, without a human compiling the report. A loop that used to swallow half a day runs automatically.
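That loop reduces to a join between two data sources. A skeletal sketch with fabricated sample rows; the field names (`keyword`, `lead_id`, `disposition`) and the 50% threshold are assumptions for illustration, not HubSpot's or Google's actual schema:

```python
# Fabricated sample data standing in for live API pulls.
ads_conversions = [
    {"keyword": "crm software", "lead_id": "L1"},
    {"keyword": "crm software", "lead_id": "L2"},
    {"keyword": "free crm", "lead_id": "L3"},
    {"keyword": "free crm", "lead_id": "L4"},
    {"keyword": "free crm", "lead_id": "L5"},
]
crm_dispositions = {  # lead_id -> outcome recorded in the CRM
    "L1": "qualified",
    "L2": "qualified",
    "L3": "disqualified",
    "L4": "disqualified",
    "L5": "qualified",
}

def keywords_to_demote(conversions, dispositions, max_disqualified_rate=0.5):
    """Flag keywords whose CRM-disqualified share exceeds the threshold --
    candidates for a bid decrease."""
    stats = {}
    for row in conversions:
        total, bad = stats.get(row["keyword"], (0, 0))
        is_bad = dispositions.get(row["lead_id"]) == "disqualified"
        stats[row["keyword"]] = (total + 1, bad + int(is_bad))
    return [kw for kw, (total, bad) in stats.items()
            if bad / total > max_disqualified_rate]

print(keywords_to_demote(ads_conversions, crm_dispositions))  # → ['free crm']
```

In an MCP setup, the two sample structures above would be replaced by live pulls from the Ads and CRM servers, and the output would feed a bid-adjustment call instead of a print.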
Inventory creates the same kind of blind spot. An agent connected to Shopify can check stock levels before weekend campaigns go live. When an SKU drops below the threshold, the corresponding product group is paused before traffic hits a page that no longer converts.
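The inventory check follows the same shape: pull stock levels, compare against a threshold, and emit the product groups to pause. A sketch with invented SKUs and an assumed 5-unit threshold:

```python
# Invented stock snapshot standing in for a Shopify inventory pull.
stock_levels = {"SKU-100": 42, "SKU-200": 3, "SKU-300": 0}
product_groups = {"SKU-100": "Backpacks", "SKU-200": "Tents", "SKU-300": "Stoves"}

def groups_to_pause(stock, groups, threshold=5):
    """Product groups whose SKU stock has dropped below the threshold --
    pause these before weekend traffic hits a page that can't convert."""
    return sorted({groups[sku] for sku, qty in stock.items() if qty < threshold})

print(groups_to_pause(stock_levels, product_groups))  # → ['Stoves', 'Tents']
```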
Even the data-pipeline work itself gets faster.
On a recent “PPC Town Hall“ episode, Lars Maat — a PPC expert and agency founder in Rotterdam — described building a Python pipeline with no prior Python experience for a parking client: it connected the Google Maps API, Google’s Things To Do feature, and Ahrefs to identify nearby attractions, check search volumes, and feed the content to a landing page generator.
The whole thing was live in two weeks. The only constraint was getting the right data in front of the AI and not what it could do.
Here’s where things get interesting, and where most of the MCP hype is skating past a real issue.
Write access to a live Google Ads account, in the hands of a probabilistic language model, without institutional constraints, is a new category of risk. An agent that can pause a campaign needs defined parameters: what threshold triggers the action, who gets notified before it fires, which campaign types require human sign-off. Those parameters don’t exist inside the AI tool. They have to be built around it.
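Those institutional constraints can live in a thin approval layer that sits between the agent and the API. A hypothetical sketch; the rule names, whitelisted actions, and thresholds are invented to illustrate the shape, not any vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class MutationRequest:
    action: str          # e.g. "pause_campaign", "set_bid"
    campaign_type: str   # e.g. "search", "pmax"
    spend_delta: float   # projected daily spend impact, account currency

# Invented policy: what the agent may do alone vs. what needs a human.
REQUIRES_APPROVAL_TYPES = {"pmax"}   # campaign types needing sign-off
MAX_AUTONOMOUS_SPEND_DELTA = 100.0   # spend threshold before escalation

def gate(request: MutationRequest) -> str:
    """Return 'auto', 'needs_approval', or 'blocked' for a proposed mutation."""
    if request.action not in {"pause_campaign", "set_bid"}:
        return "blocked"  # anything not whitelisted never runs
    if request.campaign_type in REQUIRES_APPROVAL_TYPES:
        return "needs_approval"
    if abs(request.spend_delta) > MAX_AUTONOMOUS_SPEND_DELTA:
        return "needs_approval"
    return "auto"

print(gate(MutationRequest("set_bid", "search", 40.0)))       # → auto
print(gate(MutationRequest("pause_campaign", "pmax", 10.0)))  # → needs_approval
print(gate(MutationRequest("delete_campaign", "search", 0)))  # → blocked
```

The key design point is that the gate runs outside the model: the agent proposes, the policy layer disposes, and the notification and sign-off hooks hang off the `needs_approval` branch.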

Advertisers can grant granular permissions to the Optmyzr MCP to stay in control of what the connector is allowed to do on its own, what it can never do, and what it can do with human approval.
On another “PPC Town Hall“ episode, Ann Stanley — founder of Anicca Digital and one of the UK’s most experienced paid media practitioners — described effective AI deployment as a sandwich: humans at the front who understand the goal and can give precise instructions, humans at the back who review the output and decide what ships, and AI handling execution in the middle. The quality of what comes out depends on the quality of what goes in and on whether the middle layer has any constraints at all.

This is where raw API access stops being enough.
Google’s open-source MCP server is a good piece of infrastructure. But it is not a safety net. It will happily run any GAQL query and any mutation the agent constructs, and if the agent hallucinates a campaign ID or picks the wrong lookback window, the ad account absorbs the consequences.
LLMs are probabilistic. Ad platform APIs are not. So, something has to sit in between.
We have spent over a decade encoding how Google Ads actually behaves — not just what the API exposes, but the interdependencies between settings, the edge cases around campaign types, the nuances of what makes a “duplicate keyword” a true duplicate versus a false positive. That work lives inside Optmyzr as a business intelligence layer. Our MCP connector is how we let your AI agent borrow it.
When Claude, ChatGPT, or your team’s custom agent connects to the Optmyzr MCP, it gains access to the same Sidekick capabilities your team uses inside Optmyzr: pulling PPC performance reports with rich filtering and segmentation, surfacing configured and triggered alerts, creating and editing alerts, retrieving merchant feed details, summarizing portfolio health across every active account, and — this is the one most people miss — generating and executing a full Rule Engine strategy from a plain-English description of what you’re trying to accomplish.
That matters for three reasons most DIY setups miss: reach, judgment, and safety.
The end result is an AI agent that operates across your portfolio with the reach of an API, the judgment of a platform that has been in this space since before AI agents were a category, and a safety posture that doesn’t require you to build your own circuit breakers.
If you want to experiment with read-only access across raw ad platforms, Windsor.ai and Zapier’s MCP integration are the fastest on-ramps. If you’re comfortable managing your own guardrails, Google’s open-source Ads API MCP server on GitHub gives you precise GAQL control at the cost of building the safety layer yourself.
If you run client accounts where a misfire is unaffordable — or you just want your AI agent to think across your whole portfolio with the judgment of a senior PPC strategist — the Optmyzr MCP is the fastest path to an agent that is actually safe to give the keys to. It works with Claude Desktop (via custom Connectors or manual config), Claude Code, ChatGPT (via Developer Mode apps), and any MCP-compatible client. And, you can set it up in minutes: generate an API key from the MCP Integration panel in your Optmyzr settings, paste the server URL into your AI client, and your agent is operating across every active account on your Optmyzr profile.

Full MCP setup guide and instructions.
The data wall is coming down either way. The question is whether your agent walks through it with a plan, or a prompt and a prayer.
ASRock expands its AM5 motherboard lineup with its new X870E Taichi White model ASRock has officially launched its new X870E Taichi White motherboard, a high-end offering ready for AMD’s Ryzen 9 9950X3D2 Dual Edition and other future AMD Ryzen CPUs. On the hardware side, the X870E Taichi White features a robust 24+2+1 phase VRM, a […]
The post ASRock unveils ultra-high-end X870E Taichi White AM5 motherboard appeared first on OC3D.
Microsoft confirms a shift to a “quality” focus within its consumer business During Microsoft’s Q3 2026 earnings call, the company’s CEO, Satya Nadella, confirmed that the company is doing “foundational work” within its consumer business. Microsoft plans to “win back fans” of Windows, Xbox, Bing, and Edge. By shifting its focus to quality and serving its […]
The post Microsoft CEO confirms “foundational work” to “win back” fans of Windows and Xbox appeared first on OC3D.
The PlayStation 5 DualSense controller’s features have been put to great use to enhance immersion in Saros, but for many players, adaptive triggers are only delivering frustration. The default configuration uses this feature on L2: a half-press activates Alt-Fire, while a full press through the resistance triggers your Power Weapon. In the heat of combat, it is incredibly easy to accidentally press too hard and waste your Power Weapon when you only intended to use an Alt-Fire ability. If you are struggling with these pressure levels, there is a much more reliable way to play. Separating Alt-Fire from the Trigger […]
Read full article at https://wccftech.com/how-to/saros-how-to-fix-alt-fire-controls-the-circle-button-trick/

While some of the shotguns are very solid weapon choices in the early game thanks to their powerful Alt-Fire modes, there comes a point in Saros where staggering an enemy is no longer enough. To clear the endgame biomes, defeat the final Overlords, and clear the game with ease, you will need the absolute best weapon: the Ripsaw Chakram. The Ripsaw Chakram is a high-skill, high-reward weapon that functions differently from anything else in Arjun’s arsenal. Once you "unwire" your brain from using the other more traditional weapons in the game, you will realize this is arguably the most powerful […]
Read full article at https://wccftech.com/how-to/saros-the-best-weapon-to-melt-bosses-ripsaw-chakram-guide/


AMD’s “bridge die” tech could be revolutionary for Zen 6 AMD aims to deliver a “latency revolution” with Zen 6, at least according to the leaker “Moore’s Law is Dead”. With a “fundamentally better memory controller” and “bridge die” technology, AMD aims to greatly lower memory latencies and boost inter-CCD communication speeds. This addresses the […]
The post AMD to deliver “latency revolution” with Zen 6 Ryzen interconnect tech appeared first on OC3D.
As Nintendo is unlikely to ever release its games on PC, emulation has been the only way for users to enjoy them on hardware other than the original consoles for a long time. However, over the past few years, we have seen the rise of native PC ports, including ports of Zelda: Ocarina of Time and Majora's Mask, which offer advanced features over emulated versions, such as support for higher resolutions. The latest of these is a native port of the original Super Smash Bros., which also serves as an example of the magnitude of what AI can achieve for a project like this, […]
Read full article at https://wccftech.com/nintendo-native-pc-port-super-smash-bros-ai-generated/

The Exynos 2600 will eventually be replaced by the Exynos 2700, as Samsung expands its 2nm GAA chipset portfolio later this year with the introduction of a successor. Until now, we have only heard rumors and leaks surrounding the SoC. However, during the Korean giant’s Q1 2026 earnings call, the company publicly confirmed for the first time that it is developing the Exynos 2700 and intends to extend its market share with the chip’s release. Exynos 2700 to make up a higher percentage of Galaxy S27 shipments, as Samsung looks to reduce dependency on Qualcomm and its Snapdragon family […]
Read full article at https://wccftech.com/samsung-reveals-exynos-2700-details-during-q1-2026-earnings-call/

Ever since the system's release in late 2020, developers have been hard at work expanding the PlayStation 5's functionality beyond what Sony intended, helped by leaks that could lead to permanent jailbreaks. Now, over five years since the console's release, those with an older phat system still running older versions of the system software can turn Sony's current generation into a highly capable Linux PC, and take advantage of its eight Zen 2 CPU cores and RDNA 2 GPU to run emulators and Steam games with impressive fluidity thanks to a new loader released online by Andy Nguyen a.k.a. TheFl0w. […]
Read full article at https://wccftech.com/playstation-5-linux-breakthrough-reshape-consoles/

It's been almost a year since Asobo Studio and Focus Home revealed Resonance: A Plague Tale Legacy, the third game in the peculiar action/adventure series. After a long silence, the French studio has shared a lot of info about this prequel, including its development status, in a fresh devblog. There's good news on that front, as Resonance: A Plague Tale Legacy is in the final production phase with the last recording sessions and the final notes of the score wrapping up. The blog post teases that "the release is almost here", suggesting it will be released this year, possibly ahead […]
Read full article at https://wccftech.com/resonance-plague-tale-legacy-release-almost-here/

AIKissfiy turns one or two photos into realistic kissing videos and GIFs. Upload clear face images, click Generate, and get results in 10–30 seconds. The platform supports JPG, JPEG, WEBP, and PNG, and lets you download MP4s to share on TikTok, Instagram, or Snapchat. Use it for romantic surprises, social content, character reimagining, or avatar interactions. Images are processed over encrypted connections and deleted within 24 hours.
URL to Video converts Shopify, Amazon, TikTok Shop, and Etsy product pages into ready-to-run video ads in minutes. It analyzes a product URL, extracts content, generates scripts, and assembles creatives using proven ad frameworks. You can access a library of winning ads to replicate styles, add UGC-style AI avatars, voice cloning, and text-to-speech, then export platform-ready MP4s. Ecommerce brands, dropshippers, and performance marketers use it to scale variations and launch campaigns faster.
Inquir is a serverless platform to deploy AI agents, REST APIs, cron jobs, and webhooks without managing infrastructure. Write functions in Node.js, Python, or Go, click deploy, and get a live endpoint with an API gateway, schedules, secret management, and observability built in. Hot containers keep frequent routes warm, isolation improves security, and predictable per-invocation pricing helps control cost. Teams can ship AI pipelines, Stripe webhooks, and backend tasks quickly and easily.
Open-source Terminal UI, just record & get exhaustive tests
The AI design agent that works on your canvas
Secret Mode has removed Denuvo from Star Wars Galactic Racer It looks like Denuvo has been removed from Star Wars Galactic Racer, the upcoming Star Wars-themed high-speed racing experience. All references to the controversial anti-tamper technology have been removed from the game’s Steam page. This suggests that the game’s publisher has decided against using Denuvo. […]
The post Denuvo Removed from Star Wars Galactic Racer ahead of launch appeared first on OC3D.
A Sony spokesperson has finally provided an official response to the PlayStation Online DRM controversy. Gamespot received the following statement after asking for some much-needed clarification: Players can continue to access and play their purchased games as usual. A one-time online check is required to confirm the game's license, after which no further check-ins are required. The controversy began in late April when several PlayStation users noticed a 30-day countdown timer appearing on the license information page of newly purchased digital games on PS4 and PS5. The timer displayed a "Valid Period" start and end date, along with "Remaining Time," […]
Read full article at https://wccftech.com/sony-playstation-drm-official-response-one-time-check/

A French retailer is now selling defective RTX 5090 GPUs starting at €1499, but once purchased, you can't even return them for a refund. Defective RTX 5090 GPUs Cost Half As Much As A New RTX 5090, But You'll Be Lucky To Get One Running LDLC, a French retailer, has started selling defective NVIDIA GeForce RTX 5090 GPUs. The term defective could mean a lot of things, but to make things simple, LDLC says that none of the cards work, and it's up to buyers to figure out a way to get them to work. The two variants that have […]
Read full article at https://wccftech.com/french-retailer-lists-defective-nvidia-rtx-5090-gpus-starting-at-e1499-no-refunds/

TrackinV helps you track your full investment portfolio across multiple brokers in one dashboard. Import transactions via CSV or manual entry and see time-weighted returns, benchmark comparisons to S&P 500, MSCI World, and European indices, FX impact, and dividend income. Use portfolio analytics to view sector and geographic allocation, top and bottom performers, and realized vs unrealized P/L. The default currency is EUR but it supports any currency worldwide, so you can compare performance and make better decisions faster.
Learn about AI & watch text become tokens in a node graph
Explore the entire universe in your browser, in real 3D
Turn members into a living relationship graph
Business that Runs Itself
A 128B model for coding, reasoning, and long tasks
Autonomous, goal-driven testing for web & mobile apps
Pin, group, and remove apps easily from your dock
The event invite that actually gets people to show up
Extract web data and automate browsers, no scraper required.
Builds entire dashboards from a single prompt
An open-source spec for Codex orchestration
Real profit tracking for WooCommerce + Google Ads.
Your all-in-one video workflow
AI chat and API that keeps your conversations fully private
One API to build production-ready voice agents
Open-source file storage, sharing, collaboration & syncing
Turn Gmail into Google Tasks with AI-powered
Create studio-quality launch videos with AI
Generate production-ready files directly in your chat
Markdown with LaTeX in a modern typesetting system
Turn product releases into feature adoption
Non-invasive AI secretary to help without context switching
Deploy pre-built voice and chat agents for support, sales
A foldable phone built for pen-first productivity
AI-native collaborative editor
AI-assisted music creation with built-in discovery, royalty
Start your business using AI in Claude and Replit
Web and MCP research agents, now in Gemini API

The ongoing DRAM shortages are now apparently compelling Apple to aggressively water down its ambitions for the upcoming A20 chip that is slated to power the base iPhone 18, clearly illustrating that even Apple is not entirely immune to the vagaries of the DRAM market these days. Apple's upcoming A20 chip is unlikely to leverage the new WMCM packaging tech that unlocks an unprecedented level of versatility Up until recently, Apple's upcoming A20 chip was expected to make a switch from TSMC's InFO (Integrated Fan-Out) packaging tech, which integrates components like the AP and DRAM onto a single die without […]
Read full article at https://wccftech.com/apple-a20-chip-likely-to-miss-out-on-new-wmcm-packaging-tech-that-allows-for-various-cpu-gpu-core-combos/

ClientProfit helps solo consultants and freelancers track billable and non-billable hours, reimbursable expenses, and overhead to show real profitability per client. It calculates effective hourly rates, allocates overhead automatically, and ranks clients by profit so you know where to focus. You can log expenses, generate invoice-ready reports, and view multi-client dashboards. Start free for up to 2 clients, then upgrade for unlimited clients and advanced insights.
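The per-client math described here reduces to simple arithmetic. This is an illustrative sketch, not ClientProfit's actual formula, and the function names are hypothetical:

```python
def effective_hourly_rate(revenue: float, billable_hours: float,
                          nonbillable_hours: float) -> float:
    """Revenue earned per hour actually spent on the client, billable or not."""
    total_hours = billable_hours + nonbillable_hours
    return revenue / total_hours if total_hours else 0.0

def client_profit(revenue: float, expenses: float, overhead_share: float) -> float:
    """Profit after direct expenses and this client's allocated overhead."""
    return revenue - expenses - overhead_share

# e.g. $5,000 earned over 40 billable + 10 admin hours
print(effective_hourly_rate(5000, 40, 10))  # -> 100.0
```

Ranking clients by `client_profit` rather than revenue is what surfaces the "focus here" signal the blurb describes.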
PayAnchor is a paycheck-first budgeting app for workers paid on any schedule. It shows you exactly what is left after every bill — your cushion — and how much is safe to spend each day until your next payday. No bank connection is required; you enter your pay and bills manually and see your cushion instantly. Free to start, Pro at $4.99 per month.
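The cushion idea is easy to express in code. A minimal sketch with hypothetical function names, assuming the app simply subtracts bills from pay and spreads the remainder over the days until payday:

```python
from datetime import date

def cushion(pay: float, bills: list[float]) -> float:
    """What's left after every bill is set aside."""
    return pay - sum(bills)

def safe_daily_spend(pay: float, bills: list[float],
                     today: date, next_payday: date) -> float:
    """Spread the cushion evenly over the days until the next paycheck."""
    days = max((next_payday - today).days, 1)
    return cushion(pay, bills) / days

# e.g. a $1,400 paycheck, $900 of bills, paid again in 10 days
print(safe_daily_spend(1400, [600, 200, 100], date(2026, 5, 1), date(2026, 5, 11)))  # -> 50.0
```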
Samsung has broken all records with its financial and operational performance in the first quarter of 2026, as the relentless AI-driven tailwinds show no sign of losing momentum. To get an idea of just how wild Q1 2026 was for Samsung, consider that its semiconductor operating profit surged by an unbelievable 48x year-over-year on record-breaking demand for memory products, with total operating profit exploding by 756 percent on an annual basis. Samsung Electronics Q1 2026 earnings highlights Here are the main highlights of Samsung's latest earnings disclosure: Commentary Do note that Samsung had pre-released its quarterly sales and total […]
Read full article at https://wccftech.com/samsung-q1-2026-earnings-conventional-dram-more-profitable-than-hbm-right-now/

QueryPlane is an AI-native workspace for querying data, building dashboards, and shipping internal apps on top of your databases. Connect PostgreSQL, MySQL, MongoDB, and warehouses, then generate SQL from plain English, scaffold charts, and compose forms and tables with a drag-and-drop builder. Run it self-hosted to keep data in your infrastructure, enforce role-based access, and track changes with version history. Ship secure, schema-aware tools in hours instead of weeks.
PromptArch helps you build, score, and optimize prompts and context files through guided workflows. Choose from domain-specific builders for agents, coding, marketing, and more, then generate model-tailored instructions with quality scoring and a reusable library. It also creates ready-to-use configuration files for tools like Claude Code, Cursor, Windsurf, Copilot, and ChatGPT. Teams can deploy it company-wide with shared credits and custom domains using pay-per-use pricing.
Shake It On lives in your Mac menu bar and keeps your computer awake by subtly moving the cursor with organic, human-like motion. You can set the shake distance and intervals, and control exactly when it runs with smart conditions like audio playing, CPU load, app matching, Wi‑Fi, or display status.
It pauses automatically on battery, when the screen locks, during Focus or camera use, and can follow a schedule. You can launch it at login, toggle it with a global shortcut, and forget about it. Pay once for lifetime updates, with no subscription.
Listo is a shared memory app for you and your friends. The problem is that the best nights out disappear into scattered photos and old group chats. Listo lets you discover local events, see which friends are going, and automatically build a visual timeline of every experience you've shared together. It creates a living record of your social life. You can add any experience to your timeline in seconds, collaborate with friends and family to crowdsource photos and videos, share with everyone or custom groups, or keep it just for yourself. You can also see which friends were interested in the same events.
The social media giant saw its first decline in daily active usage ever, which it claims is due to new restrictions in some regions.
The U.S. Department of Justice reported that the social media giant was among several groups that contributed to the international operation.

The research underlines a growing body of evidence showing the platform has become more politically contentious since the change from Twitter.
A new short-form video creation app called Divine offers an open-source, artificial intelligence-free platform featuring an archive of more than 500,000 restored Vines.
The update allows subscribers to watch up to four streams at a time on a single screen, and works on most living room and mobile devices.

Preliminary findings indicate the company was unable to prevent children under the age of 13 from accessing Facebook and Instagram.
Elegoo launches multicolour Canvas add-on for its Centauri Carbon 3D printer Better late than never. Over a year after the Centauri Carbon’s launch, Elegoo has officially released their Canvas multicolour add-on for their Centauri Carbon 3D printer (see our review here). This comes after the launch of Elegoo’s Centauri Carbon 2, a new 3D printer […]
The post Elegoo unlocks the Centauri Carbon 1’s Multicolor potential with Canvas add-on appeared first on OC3D.
Super Smash Bros’ first unofficial PC port has arrived Native PC ports of Nintendo 64 classics have become increasingly common, and it was only a matter of time before Super Smash Bros got an unofficial PC version. Using AI, a developer called JRickey has created a working PC version of Super Smash Bros called “BattleShip“. […]
The post Super Smash Bros first unofficial PC port is now available appeared first on OC3D.
The latest PowerToys update brings new tools like Power Display for multi-monitor brightness control and Grab and Move for simpler window handling, along with refinements to modules like Command Palette and Keyboard Manager.
TRIODE CFC is a live visual programming tool for building automation, smart-home logic, and connected device systems without getting buried in code. Wire nodes together, watch your program run in real time, troubleshoot visually, control it from web and mobile dashboards, and let AI draft logic you can inspect and edit on the grid.
Flynomi helps travelers find business and first class flights for less by comparing discounted cash fares, award seat availability, and buy-miles routes in one search. It checks each option against public retail pricing so travelers can quickly see which routes actually save money. Bookings are fulfilled through an IATA-accredited travel agency, and Flynomi also shows award and buy-miles options even when it does not earn a fee.
Alphabet's Q1 2026 earnings put Google Search revenue at $60.4 billion, up 19% YoY, as Pichai tied AI experiences to higher Search usage.
The post Google Search Revenue Grew 19% In Q1, Pichai Cites AI appeared first on Search Engine Journal.
It sounds like Apple is giving up on virtual reality, at least for now According to a report from MacRumors, Apple has “given up” on its Vision Pro VR headset following disappointing sales. The outlet called the VR headset’s recently released M5 version a “flop”, stating that the model “failed to revitalise interest in the device”. Apple’s […]
The post Apple has “given up” on its Vision Pro – report claims appeared first on OC3D.
Update 29/04/2026: Following the publication of the earnings and this article, Xbox CEO Asha Sharma commented on the revenue decline, saying the division knows "we have work to do to earn every player today and into the future." "Xbox earnings today. While we have made progress expanding the business and our margins, player and revenue growth has not yet met our ambition. We know we have work to do to earn every player today and into the future." Original Story: Microsoft reported its fiscal year 2026 third-quarter earnings today, and when it comes to Xbox revenue, while new chief executive […]
Read full article at https://wccftech.com/xbox-revenue-down-again-hardware-revenue-down-microsoft-fy26-q3-earnings/

Vaultmate lets you compare Steam libraries, reveal hidden stats, and discover your best co-op options. Connect with Steam, share a unique link, and instantly see overlap, combined hours, compatibility scores, and games ranked by what you both actually play. Explore personalized recommendations based on real playtime and keep your data private while you find the next game to enjoy together.
Allontas is a budgeting app that helps you plan before you spend, not after. It compares grocery prices across nearby stores using your actual shopping list, lets you plan upcoming expenses like birthdays and holidays before they hit your budget, and gives AI-powered insights into where your money is going.
You can connect your accounts to reconcile real transactions against your plan, set preferred stores for routine shopping, and see weeks ahead instead of reacting after the fact. Build a baseline with income and recurring bills, then adjust with confidence as the month unfolds.
Deus Ex creator Warren Spector is weeks away from releasing his latest game, Thick as Thieves, made by his new team at Otherside Entertainment. The stealth action game that began as a PvPvE game before it went full co-op and PvE is due out on May 20, 2026, and in a new developer deep-dive released today, we found out a pretty significant detail about its impending launch, which is its bargain bin price. While we'll likely continue to be concerned about how much we'll have to pay for GTA 6 until the price is finally revealed, no one looking to […]
Read full article at https://wccftech.com/thick-as-thieves-release-date-price-warren-spector-otherside-entertainment/

Spotted by MP1st, it seems that a small misstep by an artist at WB Games confirms that Injustice 3 is the next game we'll see from fighting game studio NetherRealm. The next entry in the superhero-charged fighting game is a project that we've seen teased for months now, and this latest mention adds fuel to the Injustice 3 fire. Last year, a MultiVersus dataminer teased they had found evidence of Injustice 3, and a few months after that, two of the game's voice actors, George Newbern and Phil LaMarr, who provided the voices for Superman and the Green Lantern, respectively, […]
Read full article at https://wccftech.com/injustice-3-confirmed-leak-wb-games-artist-netherrealm/

Aftersay is a social app built around one idea: not every message should be sent right away. Create a pod in text, audio, or video and lock it to open at a chosen moment. Your recipient gets a notification that something is waiting but doesn't know what or who sent it until the moment arrives. No app download is required, just scan a QR code. Pods can be anonymous or not, revealing the sender only at opening. You can have a public or private profile and choose what the world sees. It works where other platforms don't, such as a parent leaving words for a child's wedding day, a love letter with a future date, or a reminder to your future self.
Buskalo gives Latin American entrepreneurs a real presence on the map in 30 seconds. Millions of LatAm entrepreneurs are invisible online because they are blocked by algorithms, priced out of ads, and ignored by big platforms. Connect your Instagram or TikTok and your business appears where local customers search. No content creation, monthly fees, or tech skills needed. The platform offers verified profiles with reviews and a Buskalo Score, built for the informal economy of Latin America.
PlayStation has revealed the three games that'll be made available for free to PS Plus subscribers at the PS Plus Essential tier for the month of May 2026. While April's games included one title that can be connected to the action games sub-genre of Soulslikes, May's entries include two popular Soulslikes on top of another annual sports title in EA Sports FC 26. The big headliner game for subscribers to kick off April 2026 was Lords of the Fallen, the aforementioned Soulslike for that month. May, on the other hand, has two popular Soulslike titles: WUCHANG: Fallen […]
Read full article at https://wccftech.com/ps-plus-essential-may-2026-ea-sports-fc-26-wuchang-fallen-feathers-nine-sols/

ACE, or AI Compute Extensions, aims to revolutionize AI by bringing faster matrix-multiply performance as Intel and AMD work toward a unified path for x86 architectures. ACE Is Part of Intel and AMD's Unified x86 Strategy, Driving The Ecosystem In The AI Era With Faster Matrix Acceleration Last year, Intel and AMD partnered to strengthen the x86 ecosystem through their "x86 Ecosystem Advisory Group" initiative. The plan was to offer a standardized set of features across architectures in a bid to make x86 accessible, scalable, and compatible with future requirements. Four key features were announced: FRED, AVX10, ChkTag, and ACE. […]
Read full article at https://wccftech.com/amd-intel-ace-partnership-boosts-ai-performance-standard-matrix-acceleration-architecture-for-x86/

In the world of PC gaming, "badly optimized" is a phrase that gets tossed around more than a grenade in a crowded lobby. Usually, the story goes like this: a shiny new game launches, players crank everything to Ultra, watch their frames per second (FPS) counter like a hawk, and reach a verdict in seconds. If the frame rate isn't to their liking, the game is deemed "unoptimized". If it runs like butter, it's "well optimized". The reality, however, is that PC game optimization is a massive, complicated puzzle. Performance isn't just about how hard a […]
Read full article at https://wccftech.com/the-truth-about-pc-game-optimization/

OKAtlas is an active intelligence layer built on your company’s data. It continuously watches across your tools, synthesizes changing situations, and surfaces live risks and opportunities. Ask one prompt to get answers grounded in your context, backed by vector search, a knowledge graph, and temporal state. Go from insight to action without switching context. Connect in minutes and get enterprise-level insight from day one.
It solves the problem of context buried across multiple tools and the inefficiency of hunting for it by hand.
A new Google Photos feature catalogs the clothes in your wardrobe from your library.
Move from a brainstorm to a polished document, sheet or PDF with a single prompt in Gemini. 
In November 2024, with SE Ranking’s research team, we began a 16-month experiment to test how AI-generated content performs in organic search. We launched 20 websites across different niches and tracked their performance over time.
But we didn’t stop there.
We wanted to look beyond rankings and understand how AI systems discover, interpret, and cite information. So we expanded the project into a more ambitious set of experiments on AI search and LLM visibility.
For the next phase, we created a new fictional brand in a real niche with real competition to see how quickly AI systems would pick it up and whether it could be cited alongside or above trusted industry leaders and government sources.
After the first month, several patterns became clear.
We created a fictional brand and published content about it across:
Across these sites, we tested seven content formats:
We started publishing in March 2026 and tracked how five AI systems responded: ChatGPT, Google’s AI Overviews, Google’s AI Mode, Perplexity, and Gemini.
In total, we tracked 825 prompts across different query types and scenarios, which generated 15,835 AI answers during the first month.
For each prompt, we looked at three things:
This experiment is still ongoing, and the first month was designed to see how AI systems respond to newly created, fully available information tied to a fictional brand.
One of the clearest takeaways from the first month is that a brand-new site has limited chances of competing for broader, non-branded topics, even in a niche with relatively low competition.
AI systems did pick up our fictional brand quickly, but most of that visibility came when the query was already connected to the brand itself, whether through:
Specifically, out of all AI answers, 96% (15,553 out of 15,835) came from branded searches.
Non-branded informational queries produced just 4% of AI answers in total, and even those mostly came through our supporting test domains.
The pattern was even stronger on the main fictional brand site itself. There, we recorded:
That is a 1,700x difference.
This feels familiar because it mirrors classic SEO. New brands still need time to earn trust, build recognition, and compete for broader topics. When AI systems answer general industry questions, they tend to rely on established and authoritative sources.
This is why the strongest results in our experiment came from prompts tied to information only our brand could answer, such as how the product works, how often it updates, and so on.
These queries alone generated 11,430 AI answers with citations to our brand, accounting for 72% of all visibility in the experiment.
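The share quoted above can be checked against the totals reported earlier in the experiment:

```python
unique_claim_answers = 11_430  # AI answers citing the brand for unique-claim prompts
total_answers = 15_835         # all AI answers recorded in the first month

share = unique_claim_answers / total_answers
print(f"{share:.0%}")  # -> 72%
```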
The reason is simple: there is no competition.
If a query is something like “Was [Brand Name] originally built as an internal tool?”, only one source can realistically answer it. AI systems don’t need to compare sources, evaluate authority, or resolve conflicts.
That gave our fictional brand a major advantage. Even with no domain authority, it outperformed established competitors (DT 40+) by up to 32x on these queries.
What all this means for marketers and business owners is that when users ask about your brand, AI systems are likely to rely on your website as one of the main sources of information. So, the content they cite should be fully aligned with how you want your brand to be positioned.
Our experiment supports this. The “Complete Guide” page on the main site appeared in 1,799 AI answers (the highest result in the dataset) largely because it consolidated key brand information in one place. The “About Us” page followed with 1,500 AI answers. Together, these were the most cited URLs from our main domain, with LLMs relying on them 3–5 times more often than the additional domains.
In practice, AI systems may learn about your brand quickly, but what they learn depends on what you publish. Your core pages should clearly answer all the questions that are important for your brand: who you are, what you offer, and how you’re different.
This way, you can start shaping your narrative in LLMs even as a new or small brand, before you have the authority to compete for broader industry topics.
Another strong pattern in the experiment is that the five AI systems do not behave alike. They vary not just in how often they mention the fictional brand, but in how quickly they pick it up, how consistently they cite it, and which domains they prefer as sources.
Google AI Mode was the most reliable engine in the dataset.
Throughout the experiment, it placed our domain in position 1 for branded queries in about 90% of cases. Unlike other engines, it did not show major fluctuations or dependency on other test domains.
If there was one place where direct brand visibility was predictable, this was it.
Google’s AI Overviews also surfaced our tested domain for branded queries, but the pattern was less consistent.
We saw our brand appear in position 1 for 14 days for some prompts, followed by a drop mid-month that didn’t recover. More broadly, mentions and links for branded queries fluctuated heavily, appearing and disappearing multiple times each week.
Yet when links were included, it accurately described the brand. When no links were shown, it often claimed there was no public information available.
The takeaway here is not that AI Overviews failed to recognize the brand. It did. But that visibility was harder to sustain over time.
Perplexity was the breakout engine for fresh content.
It picked up newly indexed pages within 1–3 days, which clearly made it the primary driver of early visibility within our experiment.
But this speed comes with a tradeoff.
Instead of consistently citing pages from our main domain, Perplexity often used our supporting test domains as sources.
In early March, our main brand held position 1. But as we published more content on supporting domains, those domains gradually replaced it in AI citations.
By the end of the month, six different domains were being cited: our main brand site and five supporting test domains where we had published additional content about the fake brand.
So while Perplexity increases overall visibility, it doesn’t always send that visibility directly to the main brand site.
ChatGPT showed the most noticeable progression over time.
At the beginning of March, there were no links or mentions of our brand at all. But as the month progressed, visibility steadily increased.
This growth was especially clear across specific content types:
Overall, ChatGPT didn’t recognize the brand immediately, but once it did, it began surfacing the brand frequently, especially for branded prompts.
Gemini was the weakest engine in the dataset and the least consistent.
Initially, it struggled to identify our niche correctly. However, the results improved when we changed how we asked the questions. When prompts were framed as comparisons (“X vs Y”) or reviews, Gemini was much more likely to recognize the brand correctly.
Even then, the results were still limited. In the best-performing scenario (queries based on unique claims about the brand), Gemini failed to include any citations to our brand in about 60% of responses.
Next, for this experiment, we tested seven different content types across both our main site and supporting test sites.
And what we found is that comprehensive, in-depth content earns far more AI citations than shorter articles.
The strongest-performing formats were:
This does not mean there is one ideal content length or that longer pages automatically perform better. The stronger results likely came from the depth, structure, and completeness of the information these formats provided.
This finding also aligns with our broader research, where we’ve seen that detailed, well-structured content performs better across platforms like AI Mode and ChatGPT.
Pages with narrower or less comprehensive coverage generated fewer citations overall. For example:
As part of the experiment, we also tested a “spam” approach: publishing 30 thin pages (500–750 words each) on one of our test domains.
Individually, these pages were weak (averaging just 63 AI answers per page).
But together, they generated 1,897 total AI answers, which makes it the highest-performing content setup at the domain level.
However, thin content is not inherently “better” because of this result. It just shows that volume can sometimes compensate for quality by increasing the likelihood of retrieval and citation (especially in AI engines like Perplexity that prioritize freshness).
In simple terms, a few strong pages win on quality, but a large number of weaker pages can still win on overall exposure.
One of the most useful negative findings came from the content structure test.
For this part of the experiment, we created a hub page on one of our test domains and linked it to 10 supporting articles. In theory, this setup should have built strong topical depth and semantic reinforcement. All 11 pages were indexed, properly structured, and internally linked.
Yet, they generated zero AI citations.
This is significant because it challenges a common assumption carried over from traditional SEO: that topical clustering automatically improves authority or increases the likelihood of being retrieved.
At least in this experiment, it did not.
That does not mean topic clusters are useless. It means they are not sufficient alone. Internal linking and semantic breadth may help a search engine understand a site, but AI systems still need a reason to retrieve and cite a specific page for a specific answer.
Even within just one month, the results point to a clear conclusion:
AI systems appear to respond more strongly to consistency, repetition, and availability than to strict verification.
That should not be overstated. It is not that LLMs “believe anything.” But if a claim is:
Then AI systems may surface it surprisingly easily.
We also saw this in manual checks of LLM responses in AI Results Tracker. For prompts such as “is [brand] worth it,” some systems responded positively and recommended using our completely unknown fictional brand.
It may not be because LLMs automatically favor every new brand. In some cases, when little or no negative information exists, a system may fill the gap with a neutral or positive-sounding response based on the limited signals available.
But the result is the same: if a completely fictional brand can generate consistent citations and favorable recommendations under certain conditions, then brand narratives in AI search may be more flexible than they seem.
The most important outcome of this experiment isn’t that a fictional brand achieved visibility.
It’s that visibility followed a repeatable pattern once specific inputs were introduced: branded context, unique claims, diverse content formats, and sufficient presence across different sources.
That leads to two important conclusions.
If there’s one lesson here, it’s that you can’t assume AI systems will accurately represent your company, product, or category by default.
You have to actively shape the information environment they rely on.
And this is only the first month of results. We’re continuing to collect data, expand the experiment, and monitor how these patterns change over time.

Intel confirms performance and efficiency gains with new 18A-P node Intel plans to discuss its Intel 18A-P (18AP) lithography node in detail at the 2026 IEEE/JSAP Symposium on VLSI Technology and Circuits. In its session briefing, Intel confirmed that the new 18A-P node delivers notable performance and efficiency gains […]
The post Intel confirms big performance and efficiency gains with Intel 18AP node appeared first on OC3D.
Yesterday, we reported on a claim that Spiders, the studio behind the Greedfall series and Steelrising, would be shutting down "soon" after it filed for insolvency last month. The shutdown was allegedly a result of its parent company Nacon being unable to find a buyer for the studio after the insolvency filing. Today, Spiders finally spoke out via its official X (formerly Twitter) account, and while it didn't confirm anything about the failed sale, it did confirm that yes, "Spiders is being liquidated." The studio's statement began with an apology for the team's silence over the last couple of months, […]
Read full article at https://wccftech.com/greedfall-makers-spiders-confirms-shut-down-nacon/

British developer Supermassive Games is about to launch Directive 8020, the fifth mainline game in its The Dark Pictures Anthology horror game series, following Man of Medan, Little Hope, House of Ashes, and The Devil in Me (there was also the PS VR2 exclusive Switchback). Directive 8020 is special within the franchise for at least two reasons: it's the first entry to drop the anthology name "The Dark Pictures" from the title, and it's also the first one to take place in a sci-fi setting. Set in the near future, the game follows the five-person crew of the Cassiopeia, a colony reconnaissance […]
Read full article at https://wccftech.com/directive-8020-the-thing-body-horror-supermassive/

Microsoft is working on a new Windows 11 project called "K2," which will reduce bloat while increasing overall performance, including gaming. Microsoft Windows 11 "K2" Project Will Be A Step In The Right Direction, Aims To Reduce AI, Bloat, & Improve Overall Performance, Including Gaming The Windows 11 operating system has had its ups and downs. The general backlash has been there since the beginning, with features such as Recall and an integral focus on AI. These have had a negative impact, and with the ongoing "Windows Update" issues, the current Microsoft OS is far from perfect. The software giant […]
Read full article at https://wccftech.com/microsoft-wants-to-bring-steamos-level-of-gaming-performance-to-windows-11-cutting-back-ai-bloat-k2-project/

Korean semiconductor giant Samsung has achieved 80% production yield with its 4-nanometer chip manufacturing process, according to a media report from the Seoul Economic Daily. Samsung's yields are a frequent feature of industry discussion, with multiple media reports highlighting the firm's struggles with production efficiency. A process technology's yield is a crucial metric of its commercialization, and lower yields often lead to foundries having to bear the cost of defective products which their customers are unable to use or sell. NVIDIA Backed Groq Relying On Samsung For Its Chip Production Requirements According to the details, NVIDIA backed Groq has ordered […]
Read full article at https://wccftech.com/report-samsung-hits-80-yield-on-4nm-process-as-nvidia-backed-groq-ibm-and-baidu-pile-onto-its-foundry/

Mass layoffs have been, and will unfortunately likely continue to be an issue that plagues the video game industry in the short and long term. After the influx of investment that flooded the industry during the COVID-19 pandemic dried up, the thousands of layoffs we've seen over the last few years have decimated the video game industry and driven developers away from games due to the lack of job security. But, according to Maria Sayans, the chief executive officer of Ustwo Games, the team behind the Monument Valley games, job security is the price the industry needs to pay to […]
Read full article at https://wccftech.com/monument-valley-studio-ceo-hates-contractor-shift-but-calls-job-security-romantic-idea-to-abandon/

Apple has been making the most of the ongoing memory 'chipflation' by freezing the prices of its sprawling portfolio of products in a bid to gain market share. But this does not mean that the iPhone manufacturer is immune to the ongoing biting cost surges. As a matter of fact, despite recent corroborative commentary, you should not count on the base iPhone 18 sporting 12GB of RAM, especially as memory costs are slated to make up a whopping 45 percent of a given iPhone's Bill of Materials (BOM) by next year. LPDDR5, which made up just 10 percent of a […]
Read full article at https://wccftech.com/a-180-ram-bill-might-force-apple-to-stick-with-an-8gb-iphone-18/

For a configuration including a powerful CPU like Ryzen 9800X3D and one of the best mid-range GPUs on the planet, a price tag of just $1,100 seems like a steal. User Buys Ryzen 9800X3D-RTX 5070 PC Build With 32 GB DDR5 and 2 TB SSD for Just $1,100 on Costco Once again, we see another lucky buyer snagging a complete PC build for almost half the price one would have to pay in the RAMpocalypse era. We used to see such stories a lot in recent weeks, but these are still rare and occasional. While some users are happy to […]
Read full article at https://wccftech.com/costco-shopper-walks-out-with-a-ryzen-9800x3d-rtx-5070-pc-build-for-1100/

Snowy is a conversational AI for Tesla drivers. It pairs with your phone, talks through your Tesla's speakers, and integrates with the car's navigation and display. You can ask anything, send destinations to your nav, or get live news, weather, sports, and stock prices without taking your hands off the wheel. Unlike Tesla's Grok, Snowy works on every Tesla built since 2017, including the Intel-Atom cars Tesla's AI rollout has left behind. It is made independently and powered by OpenAI.
Google TV adds tools to create fun images and videos, fresh ways to showcase Google Photos — and soon a new row to stream short videos. 
Automation doesn’t fail on its own — it does exactly what it’s trained to do. The problem is that when Google Ads is fed incomplete, misaligned, or overly broad signals, it can optimize toward the wrong outcome faster than most advertisers realize.
In our second installment of SMX Now, our new monthly series, Ameet Khabra of Hop Skip Media will break down a real account where a 417% jump in conversions turned out to be the wrong kind of success. She’ll use that case study to explain the four key ways automation drift enters an account: signal drift, query drift, inventory drift, and creative drift.
You’ll leave with a practical framework for diagnosing drift early, understanding where human oversight matters most, and managing automation more deliberately so it works toward real business goals — not just platform-reported wins.
Join us May 6 at noon ET.

SEO sits at an interesting crossroads. One camp insists on optimizing for large language models (LLMs) and AI engines, and the other insists on doing SEO the same way we’ve always done it.
But there’s another way to approach it: combining the fundamentals of SEO with an understanding of how LLMs operate and why.
With this approach, you can keep what’s always worked — like on-page SEO and backlinks from reputable sources. Yet you can also look ahead to new tactics, such as optimizing for query fan-out and emerging prompt intents.
Since 2023, and the rise of tools like ChatGPT, Gemini, Claude, and Perplexity, I’ve been researching how AI engines display search results and where SEO is headed.
Here’s what I’ve found, and how you can use it to rethink your approach to a future where AI SEO considers human behavior at its core.
The Red Queen evolutionary model says that for everything to stay the same over time, everything must change. But as you adapt to the changing environment, so does the competition.
As a result, you and your competitors remain the same distance apart. In your attempt to become the predator, your prey adapts in equal measure, leaving the status quo firmly in place.
Essentially, if you don’t adapt, you’ll get eaten.
Along the same lines, AI search is a natural progression of what has existed for at least a decade. A hybrid search model has been in place since 2015, with the introduction of RankBrain.
That’s why many of the same SEO tactics still work now. Instead of a fundamental change, a series of big and small shifts has taken place over time.
For example:
“Stop optimizing for ‘AI,’” says Britney Muller via LinkedIn.
So, what makes a worthy source for LLMs? What are people using AI assistants to accomplish? Is it to find information, analyze an issue, or create a list of recommendations?
Research from Moz shows that only 12% of AI Mode citations mirror the URLs in organic results. This means AI engines only somewhat follow the traditional rules of SEO. And over time, these changes will likely become more extensive.
While Google denies that the search engine will be entirely generative, my prediction is that Google will continue along a generative path that encompasses AI assistant behavior, such as questions, actions, analysis, and creation.
As a result, your short- and long-term strategies must work together to remain innovative yet grounded.
Focusing on human behavior and traditional search while working to understand LLMs is how you keep pace with the Red Queen.
The most effective approach is focusing on where LLMs fall short: their limited databases. Their systems rely on retrieval-augmented generation (RAG) to address gaps in their databases without requiring constant retraining.
AI assistants like Google AI Mode and Gemini need RAG to prevent hallucinations and to continue surfacing relevant answers for consumers.
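The retrieve-then-generate loop described above can be sketched in a few lines. This is a conceptual illustration only, not any vendor's actual pipeline; `search` and `generate` are hypothetical stand-ins for a retriever and a language model.

```javascript
// Conceptual RAG sketch: the answer is grounded in freshly retrieved
// documents rather than relying solely on the model's training data.
function answerWithRag(prompt, search, generate) {
  const docs = search(prompt);                        // 1. retrieve external sources
  const context = docs.map((d) => d.text).join("\n"); // 2. assemble grounding context
  return generate(`Context:\n${context}\n\nQuestion: ${prompt}`); // 3. generate a grounded answer
}
```

If your pages are among the documents the retriever returns, they become part of the context the model answers from, which is why being a retrievable, citable source matters.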
Here, I gave Google AI Mode and ChatGPT the same prompt:

Both returned relevant results, but the specifics differed. Google AI Mode returned anti-aging tips and routines, while ChatGPT sourced anti-aging products.
They also used different sources for their information. Where ChatGPT preferred a fresh Today.com source, Google referenced dermatology websites and even Google Shopping listings.
In both instances, the AI assistants needed external sources.
For SEO, you need to understand how your content aligns with the limitations of AI engines. They do the searching for themselves and then generate a response for the user, only showing external sources some of the time.
It’s a subtle shift in thinking. Optimizing for search is less about crafting SEO content and more about becoming a trusted supplier for these LLMs — so when people enter a prompt, your brand shows up in the answer.
In that way, the Red Queen evolution involves studying AI answers, learning their quirks, comparing their preferences, and evaluating their most common intents.
Then, you can feed the database. Make sure Google, which has the largest database of any LLM, has sufficient data to keep you in the pool of trusted sources.
Without people, AI assistants have no power. That’s why you have to put people first.
Where are people using AI assistants to create, achieve, build, search, and prompt? And where does it make sense for your brand to be?
Now that the AI search landscape is more competitive, you have to think like a social media professional or a traditional marketer.
A short-term SEO strategy can work now, in the overlap between traditional and AI search. It uses topical authority to deliver results immediately, shortening clients’ time to success. Here’s the short-term plan.
Internal links help search engines understand your site’s overall structure. AI Mode, for example, is built with vector search models, and entities are crucial to their operation.
Vector search maps your website’s information into a high-dimensional vector space, allowing algorithms to go beyond keywords and determine the intent behind someone’s search. Internal links help strengthen these signals.
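The core mechanic of vector search can be illustrated with a similarity function. This is a minimal sketch of the underlying idea, not Google's implementation: pages and queries are embedded as numeric vectors, and relevance is scored by how close those vectors are rather than by exact keyword overlap.

```javascript
// Cosine similarity: 1 means the vectors point the same way (same meaning),
// 0 means they are unrelated. Real systems use learned embeddings with
// hundreds or thousands of dimensions; these tiny vectors are illustrative.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Two pages about the same entity end up with similar vectors even when they use different wording, and consistent internal linking reinforces those entity relationships.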
As Gianluca Fiorelli suggests:
Links have long mattered for search, and they still do. As you develop your long-term SEO strategy, they become increasingly important for surfacing your content in LLMs and AI assistants.
Plan your topical authority through these four lenses:
These are all based on traditional SEO tactics. However, they consider a hybrid or LLM-based approach versus focusing solely on organic search.
Technical health is rooted in what works for search now: site speed, schema markup, and optimized titles and descriptions.
After all, LLMs are expensive to maintain and run. It’s in their best interest to use resources that are fast and easy to extract information from.
Consider recent site speed findings from Mike King, who notes, “Slow responses can trigger 499 errors, where the AI stops waiting.”
These three short-term goals — topical coverage, internal links, and technical health — are all important for visibility in LLMs and AI engines.
But search has evolved because human behavior has changed. So, the long-term play involves adapting to human behavior.
Long-term SEO strategies should focus on the intent and actions of human behavior surrounding AI.
The four traditional search intents (informational, navigational, commercial, and transactional) are still relevant. But AI search has added a few more.
According to MIT, examples include zero-shot, instructional, and contextual prompts. Grammarly considers other intents, including educational, opinion-based, and problem-solving.
I tend to break down intent into multiple categories of SEO opportunity based on the clients I’m working with. Some common examples include directional, recommendation, local, booking, and shopping.
Once you identify the most relevant search intents, you can hypothesize what people are looking for the generative engine to do. From there, you can do one of two things:
Say your target customers are U.S. home buyers. They want to know: “Is now a good time to buy a house?”
Plug the prompt into an AI engine and study the AI-generated answer. In AI Mode, for example, you can infer that Google fans out across multiple topics, including market conditions and pros and cons.

ChatGPT, in contrast, looks at trends, forecasts, and seasonality.

Based on the data, develop a content strategy that supports query fan-out behavior.
For example, you can break down the complexities of buyer’s markets, buyer and seller perspectives, or the changes in rising inventories. You could even build a useful tool around mortgage rates or national home price trends.
I use a variety of tools to help with analyzing query fan-out. But the most popular options include Semrush, Ahrefs, and Profound.
Prompting may not even be a concern in the future if AI assistants become more sophisticated at solving problems rather than responding to prompts.
Instead, AI engines may be able to anticipate searchers’ needs and intentions, according to Harvard Business Review. That means it may be increasingly helpful to focus less on prompts and more on problems.
In the absence of keyword research, it will be more important than ever to analyze human behavior, evaluating and pivoting based on how people use AI assistants.
It’s helpful to consider how social media professionals and brand experts think creatively about where their audiences are and how to attract attention while building brand power and recognition.
For example, Rare Beauty and Rhode have both grown their brands with creativity and consumer listening, especially in the last six years.
They’ve put considerable effort into brand campaigns, public relations (PR) campaigns, TikTok content, and in real-life (IRL) experiences that have gone viral globally.
Looking at ChatGPT, the first product recommended for “best makeup gifts for Gen Z” is Rare Beauty.

Google makes similar recommendations, with Rare Beauty and Rhode leading the list. The results are influenced by PR coverage and social media virality.

SEO will have a future as long as there are search engines with AI experiences. While it might look like SEO has become the prey, it’s evolved just as much as the predator has.
Everything’s changed. Yet everything’s the same.
Internal linking is one of the most controllable levers in technical SEO. But when tracking parameters are embedded in internal URLs, they introduce inefficiencies across crawling and indexing, analytics, site speed, and even AI retrieval.

At scale, this isn’t just a “best practice” issue. It becomes a systemic problem affecting crawl budget, data integrity, and performance.
Here’s how to build a case for your stakeholders that shows the side effects of tracking parameters in internal links and proposes a win-win fix for all digital teams.
Crawl budget is often misunderstood. What matters isn’t the volume of crawl requests, but how efficiently Google discovers and prioritizes valuable pages.

As Jes Scholz pointed out back in 2022, crawl efficacy indicates how quickly Googlebot reaches new or updated content. Inefficient signals, such as low-value or parameterized URLs, can dilute crawl demand and delay the discovery of important pages.
Tracking parameters like utm_, vlid, fbclid, or custom query strings work well for campaign tracking. But when applied to internal links, they force search engines to process additional URL variations, increasing crawl overhead.
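To see the crawl-side view concretely, here is a hypothetical normalization helper. Every parameter combination is a distinct URL to a crawler, but all of them collapse to one canonical page once tracking parameters are removed; the parameter list and URLs are illustrative.

```javascript
// Illustrative list of tracking parameters to strip from internal URLs.
const TRACKING_PARAMS = ["utm_source", "utm_medium", "utm_campaign", "fbclid", "vlid"];

// Return the canonical form of a URL with tracking parameters removed,
// using the standard URL API available in browsers and Node.
function stripTrackingParams(rawUrl) {
  const url = new URL(rawUrl);
  for (const param of TRACKING_PARAMS) {
    url.searchParams.delete(param);
  }
  return url.toString();
}

// "https://example.com/pricing?utm_source=nav&fbclid=abc" and
// "https://example.com/pricing?utm_medium=footer" both normalize to
// "https://example.com/pricing" — one page, one crawlable address.
```

Non-tracking parameters that genuinely change page content are left untouched, which is exactly the distinction a crawler cannot make on its own.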
Crawlers treat every parameterized URL as a unique address. This means:
Search engines must still crawl first, then decide what to index.

Tracking parameters can quickly escalate a single URL into many variations by combining different values, creating a large number of duplicate URLs. This leads to:

On large websites, this becomes a critical issue. Googlebot has a limited number of crawl requests per website. Any time spent crawling parameterized URLs reduces the opportunity to crawl the most important pages, even the so-called “money pages.”
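A back-of-the-envelope calculation shows how fast the variants multiply. Each tracking parameter can be absent or take one of its values, so the crawlable addresses for a single page grow multiplicatively; the parameter counts below are illustrative.

```javascript
// For each parameter with n possible values, a URL either omits it or
// includes one of the n values — (n + 1) choices per parameter, multiplied
// across parameters.
function countUrlVariants(valueCountsPerParam) {
  return valueCountsPerParam.reduce((total, n) => total * (n + 1), 1);
}

// One page with utm_source (3 values), utm_medium (2), and utm_campaign (4):
// countUrlVariants([3, 2, 4]) → 60 distinct crawlable addresses for one page.
```

Sixty addresses for one page is sixty opportunities for Googlebot to spend crawl requests on duplicates instead of on your money pages.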

Granted, crawl budget is typically a concern for larger websites, but that doesn’t mean it should be ignored on sites with 10,000+ pages. Optimizing for it often reveals more room for efficiency gains in how search engines discover your content.
A common misconception is that canonical tags “fix” parameter issues and “optimize” crawl efficacy. They don’t.
Canonicalization works at the indexing stage, not at the discovery stage. If your internal links point to parameterized URLs:

This is why parameter-heavy sites often show patterns like:

Crawl budget is not the only culprit here.
Ironically, tracking parameters in internal links can corrupt the data they are meant to measure.
When a user lands on your site via organic search and then clicks an internal link with a tracking parameter, the session may break down and be reattributed.
Anecdotally, Google Analytics 4 resets a session based on campaign parameters, whereas Adobe Analytics does not.
This creates several downstream issues. Attribution becomes fragmented, especially under last-click models, where credit may shift away from organic entry points to internal interactions.

As performance is split across URL variants, page-level SEO reporting becomes unreliable and creates a disconnect between organic SERP behavior and what actually happens when a prospect lands on your pages.
One of the most overlooked risks is backlink fragmentation. If internal links include tracking parameters, users may share those exact URLs. As a result, external backlinks may point to parameterized versions of your pages rather than the canonical ones.
This means authority is split across URL variants, some signals may be lost or diluted, and search engines may treat these links as lower value. Over time and in large proportions, this is set to weaken your backlink profile.

Worse, this compounds the tracking problems above: those external backlinks carry internal UTM parameters into external environments, permanently fracturing session attribution and wasting crawl resources.
Using UTM parameters in your internal links is more than just a crawl overhead. It also strains your caching system.
Each URL with parameters is essentially a different page with its own cache entry. That means the same content may be fetched and processed multiple times, increasing load on both servers and CDNs.

This becomes even more critical with AI crawlers and LLM retrieval systems. It’s understood that many of these agents fetch content at scale and have limited rendering capabilities, making them more sensitive to parameterized URLs.
As the web is increasingly consumed by aggressive AI bots, having internal links with tracking parameters leaves traditional web crawlers and RAG-based systems wasting bandwidth on duplicate cache entries for pages that serve the same purpose.
At the same time, many of these systems rely heavily on cached versions and avoid rendering JavaScript due to architectural and cost constraints at scale.

This makes URL hygiene a foundational requirement, not just a technical preference.
On the cache front, Barry Pollard recently suggested a smart workaround that Google has been testing for a while.

Provided that removing those parameters yields identical content, helping the browser reuse a single cached response can dramatically improve Time to First Byte (TTFB), a metric that directly affects your Core Web Vitals.
Some CDNs already strip UTM parameters from their cache key, improving edge caching. However, browsers still see each parameterized URL as a separate asset and will request them one by one.
The No-Vary-Search response header closes this gap by aligning browser caching behavior with CDN logic. Implementing it allows browsers to treat URLs with specific query parameters as the same resource. Once set, the browser excludes the specified parameters during cache lookups, avoiding unnecessary network requests.
In practice, the header signals which parameters to ignore when determining cache identity. The only caveat is that it’s supported in Chrome 141 and later, with support coming in version 144 on Android. If most of your organic traffic comes from Chromium-based browsers and you run paid campaigns, this is worth adding now.
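Based on the syntax in the draft HTTP specification, a response that tells the browser to ignore common campaign parameters when matching its cache could look like this (the parameter names are examples, not a required set):

```http
HTTP/1.1 200 OK
Content-Type: text/html
No-Vary-Search: params=("utm_source" "utm_medium" "utm_campaign")
```

With this header in place, `/pricing?utm_source=nav` and `/pricing` are treated as the same cached resource, so the second request is served from cache instead of hitting the network.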
While canonicalization to the clean URL version isn’t a long-term solution, it remains the standard requirement. If you’re stuck in such a position, it’s likely a symptom of deeper architectural challenges at the intersection of SEO, IT, and tracking.
Either way, the preferred solution is to move measurement from the URL layer into the DOM layer.
This can be achieved successfully using a good old HTML workaround: data attributes.

This configuration allows tracking tools (e.g., tag managers) to capture click events and user interactions without altering the URL. Plus, it ensures internal links point to the canonical version without introducing duplicate cache entries.
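A minimal sketch of the pattern follows; the markup, attribute names, and listener are illustrative, not a specific vendor's setup.

```javascript
// The href stays canonical, and tracking context lives in data-* attributes.
// Example markup:
//
//   <a href="/pricing"
//      data-track-source="homepage-hero"
//      data-track-campaign="spring-sale">See pricing</a>
//
// A tag manager or a listener like the one below reads the attributes on
// click, so no tracking data ever enters the URL.
function buildTrackingPayload(link) {
  return {
    source: link.dataset.trackSource ?? null,     // from data-track-source
    campaign: link.dataset.trackCampaign ?? null, // from data-track-campaign
    destination: link.getAttribute("href"),       // clean, canonical URL
  };
}

// Browser wiring (sketch):
// document.addEventListener("click", (event) => {
//   const link = event.target.closest("a[data-track-source]");
//   if (link) sendToAnalytics(buildTrackingPayload(link));
// });
```

The crawler sees only `/pricing`, while analytics still receives the full click context.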
Dig deeper: How the DOM affects crawling, rendering, and indexing
| Benefit | Stakeholder |
| --- | --- |
| Enables clean internal link URLs and unbreakable tracking | SEO, analytics, product managers |
| Robust against CSS changes for page restyling | Web developers, product managers |
| Does not interfere with providing structural or semantic meaning to screen readers and search engines | Product managers, SEO |
| Easy to embed directly onto an HTML element | Web developers, analytics |
| Acts as a hidden storage layer for tracking data, allowing tools to capture interactions via JavaScript without exposing parameters in URLs | PR, affiliates, analytics |
Tracking parameters in internal links is a legacy workaround, often rooted in siloed teams and flawed site architecture.
However, they create downstream issues across the entire organization: wasted crawl budget, fragmented analytics, diluted backlink equity, and degraded web performance. They also interfere with how both search engines and AI systems access and interpret your content.
The solution isn’t to optimize these parameters, but to remove them entirely from internal linking and adopt a cleaner, more robust tracking approach.
Using a good old HTML trick sounds like just the right fix to win over traditional search engines, AI agents, and, especially, your stakeholders.
Note: The URL paths disclosed in the screenshots have been disguised for client confidentiality.


Here’s what you need to run The Blood of the Dawnwalker on PC Bandai Namco has unveiled that The Blood of the Dawnwalker is coming to PC and consoles on September 3rd 2026. On PC, the game will be available on Steam, and Rebel Wolves has provided detailed PC system requirements for the game. The […]
The post Is your PC ready for The Blood of the Dawnwalker appeared first on OC3D.
Notepad++ is now available on macOS as a free open-source app. It uses the same core editing engine, bringing familiar features like syntax highlighting, plugins, and macros, while integrating with macOS for better performance.
A little more than a week after unionized members of the Build a Rocket Boy staff took legal action against the studio's leadership for installing spy software on work devices that allegedly violated data protection laws, Chris Wilson, a former animator who worked at the studio for the last six years and has worked in the video game industry for more than two decades has come forward to share just exactly what it was like working on MindsEye under former Rockstar producer Leslie Benzies and his co-chief executive officer, Mark Gerhard. In a massive interview with Kotaku, Wilson is very […]
Read full article at https://wccftech.com/mindseye-dev-speaks-on-alleged-corporate-sabotage-was-just-hate-mail-to-build-a-rocket-boy-ceo/

Memory makers are enjoying a huge boost to revenue as AI demand has earned them more in a single quarter than the entire previous year. Memory makers such as ADATA saw a 17x Annual Growth In Profits, Others Also Seeing Similar Boost From AI Boom The AI crunch continues to devastate the consumer markets as component prices go up; at the same time, memory makers are seeing an astronomical rise in their profit margins. So there are two things to factor here, first is that memory demand is at an all-time high due to AI firms requiring more DRAM for […]
Read full article at https://wccftech.com/memory-manufacturers-earned-more-in-q1-than-all-of-last-year-prices-to-spiral-up/

TSMC is all set to double its 2nm production capacity through five state-of-the-art fabrication plants to meet global AI and chip demand. TSMC's 2nm Output To Be 45% Higher Than 3nm At The Same Stage As Production Ramps Up Recently, we shared how TSMC is gearing up to boost its 2nm and 3nm wafer output aggressively by the end of 2026. Now, the company is reportedly going to double its 2nm capacity output to meet "explosive" demand for AI and compute. As such, TSMC has set up five wafer fabs, all entering the ramp-up phase this year for its 2-nanometer process. […]
Read full article at https://wccftech.com/tsmc-doubles-down-on-2nm-five-fabs-ramping-at-once-output-eclipse-3nm-by-2x/

Clera is an AI-powered talent agent that matches candidates with startup roles and introduces them directly to founders and hiring managers. Share your experience, preferences, and dealbreakers to receive curated opportunities with context instead of cold applications. It’s free for candidates because companies pay upon hire. Chat to define goals, review matched roles, accept intros, then move quickly to interviews. Clera also offers tools like a resume creator, career coach, and salary calculator to help you prepare.

Ranking and visibility are no longer the same thing. For 20 years, SEO teams optimized for SERP position. Higher rankings meant more visibility, more clicks, and more traffic. That relationship is breaking down.
Earlier this year, Ahrefs found that only 38% of pages cited in Google AI Overviews also ranked in the traditional top 10. Eight months earlier, that number was 76%.
The implication is straightforward: being highly ranked no longer guarantees being seen.
In AI-generated answers, visibility is determined by inclusion — and by how your brand is represented when it appears. That representation is determined by a different set of signals.

Four distinct patterns determine how brands appear inside AI-generated responses:

When an AI model lists three CRM options, the order matters. Up to 74% of users choose the AI’s top recommendation, according to a Growth Memo and Citation Labs AI Mode study.
This reinforces how heavily people rely on the first option presented.

About 26% of users overrode the AI’s order entirely when they recognized a brand they already knew. This is a shift from how users behave in traditional search. And 56% of users built their own shortlist from multiple sources. In AI Mode, 88% took the AI’s shortlist without checking further.
The AI’s curated answers carry that much weight. But mention order isn’t stable. SE Ranking’s August 2025 analysis found that when you run the same query three times, AI Mode only overlaps with itself 9.2% of the time.
The sources change. The order changes, sometimes dramatically.
The lesson: Mention order creates an advantage, but it isn’t deterministic. Brand recognition can trump position.
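That run-to-run instability is easy to quantify. Here is a toy illustration of the idea, using pairwise Jaccard overlap across repeated runs; the domains and the averaging method are hypothetical, and SE Ranking's exact methodology is not public:

```python
from itertools import combinations

def overlap_rate(runs):
    """Average pairwise overlap between result lists from repeated runs.

    Overlap for each pair is |intersection| / |union| (Jaccard similarity).
    """
    scores = []
    for a, b in combinations(runs, 2):
        sa, sb = set(a), set(b)
        scores.append(len(sa & sb) / len(sa | sb))
    return sum(scores) / len(scores)

# Three hypothetical runs of the same query, mostly disagreeing with
# each other -- the pattern SE Ranking observed in AI Mode.
run1 = ["brand-a.com", "brand-b.com", "brand-c.com"]
run2 = ["brand-a.com", "brand-d.com", "brand-e.com"]
run3 = ["brand-f.com", "brand-g.com", "brand-h.com"]

print(round(overlap_rate([run1, run2, run3]), 3))  # low single-digit overlap
```

A single shared domain across three runs already drags the average overlap below 10%, which is roughly the regime the study describes.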
Not all mentions are created equal. Some brands get a single sentence. Others get a full paragraph explaining their strengths, use cases, and differentiators.
The difference comes down to how much citation-worthy information AI systems found about you.
When Semrush announced its AI Visibility Awards in December 2025, it analyzed more than 2,500 prompts run through ChatGPT and Google AI Mode. Category leaders like Samsung in consumer electronics didn’t just appear more often. They got more detailed descriptions when they did appear.
Challenger brands like Logitech in gaming accessories showed up, too, but typically with shorter mentions focused on a single differentiator.
The top 4.8% of URLs cited 10+ times by ChatGPT share a common trait. They’re comprehensive pages that answer “what is it,” “who uses it,” “how to choose,” and “pricing” in a single URL.
Length seems to matter, too. Pages above 20,000 characters average 10.18 citations each; pages under 500 characters average just 2.39.
The lesson: If AI systems have thin data about your brand, you get thin mentions.
AI systems don’t just cite sources. They characterize them by tone, which reveals how much confidence the AI has in your authority.
HubSpot’s AEO Grader, launched in early 2026, classifies brands into competitive roles: leader, challenger, or niche player. They’re positioning labels that determine how persuasively AI presents you.
Semrush’s awards data showed that category leaders have less than 20% monthly volatility in AI share of voice. Once AI systems establish you as a leader, that perception tends to stick.
The language reflects this correlation.
Most brand mentions in AI answers are neutral or positive. But neutral isn’t the same as enthusiastic.
The difference between “also offers project management features” and “considered one of the top three project management platforms” is authority signaling.
The lesson: AI doesn’t just say your name. It frames your reputation.
Comparative positioning is the closest thing to traditional rankings in AI answers: how you’re positioned when multiple brands appear together. But instead of Position 1 vs. Position 2, it’s “better for X” vs. “better for Y.”
Amsive’s research found clear positioning hierarchies.

Kevin Indig’s Growth Memo research revealed a critical nuance. When AI positioned a brand as “best for startups” versus “best for enterprises,” users self-selected based on that framing, even if both brands technically served both segments.
The lesson: You’re not competing for position 1 anymore. You’re competing to own a specific positioning niche in AI’s mental model of your category.
We already covered the 38% overlap stat. The interesting question is why it dropped so fast. The answer: query fan-out.
When an AI Overview triggers, Google doesn’t just evaluate the top-ranking pages for the user’s actual query. It breaks the question into multiple sub-queries, retrieves relevant passages from across its index, and synthesizes them into a single response.
Your page might rank No. 1 for “best project management software” and still get skipped. The AI pulled from pages ranking for “project management for remote teams” or “integrations with Slack” instead. One query to the user. A dozen queries behind the scenes.
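The mechanics can be sketched in a few lines. Everything here is illustrative: the sub-queries are hard-coded (the real system generates them with a model), and the index is a toy dictionary standing in for Google's retrieval layer:

```python
# Toy sketch of query fan-out: one user query becomes several
# sub-queries, each retrieved independently, then merged into one
# source pool. All names and URLs below are hypothetical.

INDEX = {
    "best project management software": ["example-roundup.com"],
    "project management for remote teams": ["remote-guide.com"],
    "project management slack integrations": ["integrations-blog.com"],
}

def fan_out(query):
    # In reality the model generates these; here they are hard-coded.
    return [
        query,
        "project management for remote teams",
        "project management slack integrations",
    ]

def retrieve(sub_query):
    return INDEX.get(sub_query, [])

def answer_sources(query):
    sources = []
    for sq in fan_out(query):
        for url in retrieve(sq):
            if url not in sources:
                sources.append(url)
    return sources

print(answer_sources("best project management software"))
```

The point of the sketch: two of the three cited sources never ranked for the user's actual query at all. They won inclusion by ranking for the invisible sub-queries.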
SE Ranking’s February 2026 research found that Google’s upgrade to Gemini 3 replaced approximately 42% of previously cited domains and generates 32% more sources per response than its predecessor. Traditional ranking positions became even less predictive overnight.
Semrush’s analysis of 17 months of clickstream data reveals an unexpected pattern: Over 20% of ChatGPT referral traffic goes to Google. That share rose from roughly 14% at the start of the study to more than 21% by early 2026.

The biggest beneficiary of ChatGPT’s growth is Google.
Users go to ChatGPT to get an answer, then head to Google to confirm findings or research brands they just discovered. For users, they’re complementary steps in a single journey.
Most ChatGPT prompts don’t match traditional search language. Between 65% and 85% of prompts couldn’t be matched to any traditional search keyword in Semrush’s database of 27 billion keywords.
That level of specificity doesn’t exist in keyword databases — and it’s becoming more common.
If position doesn’t matter the way it used to, what does?
Traditional rank trackers can’t measure these signals.
The 2026 measurement model requires parallel tracking. Traditional SEO metrics still matter for the portion of search that remains blue links. AI visibility requires tracking how often your brand appears and how it’s represented in AI-generated answers.
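The AI side of that parallel tracking reduces to sampling many AI answers and measuring inclusion. A bare-bones sketch, with invented brand names and answer texts; real tools run thousands of prompts and also score sentiment and framing, not just presence:

```python
import re

def mention_share(answers, brands):
    """Fraction of sampled AI answers in which each brand appears.

    Purely illustrative: uses whole-word, case-insensitive matching
    over a handful of sample answers.
    """
    counts = {b: 0 for b in brands}
    for text in answers:
        for b in brands:
            if re.search(rf"\b{re.escape(b)}\b", text, re.IGNORECASE):
                counts[b] += 1
    return {b: counts[b] / len(answers) for b in brands}

# Hypothetical AI answers collected for one prompt set.
answers = [
    "Acme and Globex are popular options for small teams.",
    "Acme is considered one of the top platforms.",
    "For enterprises, Globex is often recommended.",
]
print(mention_share(answers, ["Acme", "Globex"]))
```

Run over time, a metric like this becomes an AI share-of-voice trendline that sits alongside, not in place of, traditional rank tracking.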
A new category of tools has emerged to support this shift.
None of these tools replace traditional SEO infrastructure. They supplement it.
The ranking obsession isn’t going away entirely. Traditional search still drives traffic. But measuring success solely through rankings misses the larger shift.
AI answer engines now act as gatekeepers, surfacing only the brands they consider citation-worthy.
Visibility depends on how often you’re included, how you’re described, and how you’re positioned relative to competitors.
Traditional rank trackers can’t capture that. It requires a different measurement model. That’s what determines visibility now.


Windows “K2” isn’t Windows 12; it’s better than that. Microsoft has promised to make Windows 11 faster and more reliable. Over the years, goodwill towards Windows has eroded, and many users have begun actively seeking an alternative. Recently, Microsoft’s focus on Copilot and AI has accelerated this negativity to the point that Microsoft has started […]
The post How Microsoft’s “K2” project aims to fix Windows appeared first on OC3D.
Former ZeniMax Online Studios founder and studio head Matt Firor has finally spoken openly about Project Blackbird's cancellation. Firor, who was behind the successful release of The Elder Scrolls Online and its continued support for many years, left the studio following Microsoft's decision to cancel the project after many years of development. Shortly after that news, he admitted that the two things were directly related. Now, speaking to MinnMax, he provided a lot more color on why he feels the decision to shut down Project Blackbird is a missed opportunity for Xbox. It's conflicted. I'm so proud of what the […]
Read full article at https://wccftech.com/project-blackbird-cancellation-matt-firor-xbox/

As Samsung's unionized workers grow ever bolder, egged on by the rapidly fattening purse of its memory business, calls for pay and bonus hikes have spread to other divisions as well. Irked by those demands from its less profitable business units, Samsung is reportedly eyeing the nuclear option of definitively splitting up the conglomerate by spinning off its very lucrative, semiconductor-focused Device Solutions (DS) division into an entirely different company. The top echelons of Samsung's management appear to be in panic mode ahead of an impending workers' strike, so […]
Read full article at https://wccftech.com/strike-chaos-samsung-threatens-to-spin-off-its-semiconductor-focused-ds-division-into-a-new-company-to-neutralize-union-leverage/

Users can now mix and match DDR5 memory on the latest Intel platforms to achieve better tuning without worrying about compatibility. ASUS has rolled out BIOS versions 3002 and 3103, allowing users to mix, match, and optimize different-spec "green" modules on Intel Z890 and B860 motherboards. At a time when DDR5 RAM prices are at an all-time high, many are looking at JEDEC industry-standard RAM modules, which typically bring lower clocks out of the box compared to Intel XMP or AMD EXPO-enabled memory modules. These adhere to strict standards and operate at a particular frequency, timing, and voltage. […]
Read full article at https://wccftech.com/asus-aemp-iii-brings-support-for-mixed-ddr5-configurations-on-intel-z890-and-b860-motherboards/

Lisuan's 7G100 gaming graphics card will launch on 20th May, becoming China's first fully domestic 6nm GPU with WHQL certification. China's first domestically produced 6nm gaming GPU brings wider game support and, biggest of all, Microsoft WHQL certification for its drivers, joining the ranks of Intel, NVIDIA, and AMD. This makes Lisuan Technologies the first Chinese GPU maker to achieve WHQL certification, marking significant progress for domestic producers. Lisuan also states much wider game […]
Read full article at https://wccftech.com/lisuan-launches-7g100-china-gaming-graphics-card-on-20th-may-6nm-whql-certification/

Intel continues to see increased confidence in its upcoming Foundry technologies, such as 18A-P, 14A, and EMIB. Apple and Google will reportedly leverage Intel Foundry's 18A-P and EMIB technologies, with 14A customers also lining up. The agentic AI and inferencing boom has led to a significant surge in CPU demand, leaving major semiconductor companies such as TSMC facing severe supply constraints even as they go on a large-scale expansion spree to meet demand. At the same time, Intel has been driving revenue up by selling off salvaged dies, but the company is also attracting the attention of various […]
Read full article at https://wccftech.com/intel-18a-p-pulls-in-apple-next-m-chips-emib-reportedly-wins-google-tpuv8e/

Adestto AI deploys 26 self-optimizing trading bots across Forex, gold, indices, and crypto. It provides real-time AI-verified signals and fully automated MT5 execution, with weekly AI tuning that adapts strategies to market conditions. You can manage risk with predefined profiles, automatic stop-loss, and news pauses, and control everything from a dashboard with Telegram alerts and built-in backtesting. Plans include VPS hosting and 24/7 disciplined trading.
Grocyy scans your grocery receipts, extracts products, prices, and store details, and organizes every purchase into a clean, searchable dashboard. It tracks spending by store and category, highlights trends, and helps you compare costs over time. Grocyy learns your buying habits to predict when you'll run out, estimate your next shopping date, and remind you before essentials run low. Use it to spot unnecessary purchases, control your budget, and save time without manual tracking.
Our vision for Gemini is to build an assistant that truly understands you—one that evolves with your needs rather than providing one-size-fits-all responses. Today, we’r… 


The March 2026 core update brought what Google describes as a design “to better surface relevant, satisfying content for searchers from all types of sites.” This confirms the simplest truth in search: people use Google to get answers.
Whether it’s solving a problem, learning something new, or making a decision, searchers want content that is genuinely helpful in their busy, on-the-go lives. If your content does that, it succeeds. If it doesn’t, no amount of SEO tricks, hacks, or magic bullets will get your content to show up on page one, let alone in AI Overviews.
AI Overviews went from appearing for just 6.49% of queries in January 2025 to 15.69% in November 2025, according to a Semrush study. Depending on the source, AI Overviews today appear for 25-50% of queries.
It’s clear that search engines and LLMs are working together more efficiently today than just a year ago. Fast forward another year, and we can only imagine.
For any SEO focused on creating helpful content and understanding user intent, it’s a truly exciting time to be in the industry. Your genuinely useful content can be surfaced in AI Overviews using retrieval-augmented generation (RAG) and query fan-out.
Entire papers have been written on these two concepts alone. The TL;DR is that SEO today is about more than just keywords or counting backlinks. Modern search is designed to connect searchers with content that actually answers their questions and satisfies user intent.
These systems, and those still being implemented (see Google’s blog on TurboQuant), are getting better at recognizing and dismissing thin, duplicate, or superficial content. Pieces that simply restate what someone else has already said online, lack originality, and fail to demonstrate legitimate real-life experience will continue to struggle to rank.
Depth, clarity, and expertise have always mattered, but SEOs who want to continue to succeed in 2026 and beyond are going to have to double down on these factors:
For many SEOs, this is a welcome shift. It’s not about just checking off boxes anymore.
Sure, we still have to do those things. But the bar for what constitutes good SEO is being raised far beyond the basics. When search engines evaluate content today, they’re looking for signals that SEOs and content creators are providing real value to searchers.
Small, local, or service-based businesses that rely on SEO-driven leads for revenue can use these same strategies, too. While success isn’t measured using the same metrics as it was just a couple of years ago, the result of good SEO remains: Get the business recommended before the competition for as many searches as possible.
Two years ago, this meant clicks. Today, it means visibility. AI platforms like ChatGPT, Gemini, and AI Overviews often recommend businesses without linking to websites directly, if at all.
A few tools have been developed to measure AI metrics, but these can get pricey, and as Elizabeth Rule said, “Measuring visibility is like trying to measure a wave with a ruler.”
This is why maintaining strong communication between stakeholders and the SEO team is so important. When success can’t be measured simply, a simple question of “how’s business going?” matters now more than ever. Beyond user intent, SEOs need to understand user behavior, mood, and temperament.
Here are five tips to get you started on creating content that is genuinely helpful:
Think beyond the initial query. What will readers ask next?
One of my favorite places to do research for this is the People Also Ask (PAA) section on the SERP. For example, say you're writing about herniated disc treatment. Just Google "herniated disk treatment" and use the PAA feature to brainstorm more questions your audience may ask about the topic you're writing about. The more questions you click, the more ideas it'll generate.
E-E-A-T is an SEO hill I will die on because it works. Share your knowledge, case studies, testimonials, or firsthand insights. This builds trust when done right and when you’re creating for people, not search engines.
This is what the helpful content update of 2022 was all about.
We’d all love to believe that everything we write is being read word-for-word. It’s not. People skim. They’re looking for an answer while they’re doing other things.
This is why clearly structured web pages are so important on both mobile and desktop. Use headings, bullet points, and concise paragraphs to help readers quickly find answers.
Authenticity sounds like a buzzword (and maybe it is), but people can tell when you’ve used AI to write something or when you’re just publishing content for SEO.
Much as it pains me (an English major who loves to read long novels and write dissertations) to say, no one cares about your personal anecdotes or how many adjectives you can think of for your “superior” service. They just need an answer to the question they searched.
Avoid fluff or filler. Real-world, practical content resonates better than generic advice.
If someone called and asked you, "How long does it take to change the water heater in my 1950s home?" you wouldn't need 1,500 words to answer them. The content you create on the internet should be the same.
If you’ve been paying attention to GEO/AEO/SEO for AI, this might sound familiar to you as a little something called semantic triples. This sounds intimidating at first, but it’s really just sixth-grade English.
A semantic triple answers who, does what, for whom (or how). Remember diagramming sentences? It's the relationship between the subject, predicate, and object, and it can be any subject, predicate, and object.
I first heard about semantic triples from Mike King during SEO Week 2025 when he broke down his concept of relevance engineering. If you haven’t watched his video on this topic, I highly recommend it.
The basic idea is that SEO is about your audience:
A semantic triple answers these questions. It provides structure and clarity. It’s the “Who, What, and How” that Google told us about with the HCU documentation. It’s also genuinely valuable information for searchers.
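In code, a semantic triple is nothing more exotic than a (subject, predicate, object) tuple. A quick sketch with invented example triples (the business and the facts are hypothetical):

```python
# A semantic triple is just (subject, predicate, object) -- the same
# structure you learned diagramming sentences. These examples are
# illustrative, not drawn from any real schema or business.

triples = [
    ("Acme Plumbing", "replaces", "water heaters"),
    ("Acme Plumbing", "serves", "homeowners in Springfield"),
    ("water heater replacement", "takes", "2-3 hours"),
]

def statements(triples):
    """Render each triple as a plain, unambiguous sentence."""
    return [f"{s} {p} {o}." for s, p, o in triples]

for line in statements(triples):
    print(line)
```

Writing content so that its key claims decompose cleanly into triples like these is exactly the clarity that both sixth-grade grammar and retrieval systems reward.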
Knowledge is your superpower. You’re the only person who can tell your story, explain your process, and show readers why your business or brand matters.
The most reliable SEO strategy remains the same with each new core update from Google: Create content that genuinely helps searchers.
Focus on the problems your audience is trying to solve, answer their questions fully, and share your expertise. Thin or derivative content won’t cut it in a world of AI-driven search and retrieval systems.
Google and AI platforms are trying to do the same thing searchers are doing: find the most helpful content. If you respond to that need, your content will rise to the top, no tricks, hacks, or shortcuts necessary.


Fact: Agentic AI is making humans indispensable.
More than 40% of agentic AI projects will be canceled by the end of 2027. That is a prediction from Gartner published in June 2025, based on a poll of more than 3,400 organizations actively investing in the technology.
The reason cited is not that the agents do not work. It is that the humans deploying them are making the wrong decisions. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” according to Anushree Verma, senior director analyst at Gartner.
Organizations are deploying agents without a clear strategy, without understanding the complexity, and without the governance to manage what happens when something goes wrong.
In other words, the agent is only as good as the human behind it.
This matters enormously for marketing. AI agents in marketing are real, accelerating, and in many cases necessary. Agents that select audiences. Agents that generate content. Agents that optimize send times, choose offers, and orchestrate entire customer journeys autonomously, continuously, and at a scale no human team could match. The capabilities are here today and growing rapidly.
But Gartner’s data reveals a warning and marketing leaders who miss it will find themselves on the wrong side of that 40%.
The failure rate Gartner describes is not random. It starts with fear.
Fear of being left behind. Fear of watching competitors move faster. Fear of being the CMO who did not act when everyone else did. That fear is driving organizations to deploy agentic AI, not because they have a strategy, but because they cannot afford to be last.
The result is agents built on broken workflows. Agents fed with poor data. Agents operating without the governance structures that keep them aligned with business goals. The agents execute… the wrong things, in the wrong ways, at the wrong times.
FOMO is not a strategy. And in the agentic era, it is an expensive mistake.
Gartner identified a widespread trend it calls "agent washing": vendors rebranding existing chatbots and automation tools as agentic AI without delivering genuine autonomous capabilities. Of the thousands of vendors claiming agentic solutions, Gartner estimates only around 130 offer real agentic features. Marketing teams investing in the rest are not getting agents. They are getting dressed-up automation with an agentic price tag.
The consequences go beyond wasted budget. Gartner predicts that in 2026, one-third of companies will harm customer experiences by deploying AI prematurely, eroding brand trust and damaging both acquisition and retention.
A personalization agent that misreads a customer. A content agent that violates compliance. A journey agent that floods a churning customer with offers at exactly the wrong moment. These are the predictable outcomes of deploying autonomous systems without the human judgment to direct them.
The dumbing down of marketers
Gartner’s third prediction is the most revealing of all. GenAI usage leads to the atrophy of critical thinking skills. As a result, 50 percent of global organizations will require AI-free competency evaluations.
Half of all organizations are watching their people get dumber because AI is always available to think for them. Quietly. Gradually. Until the day the algorithm is wrong and nobody in the room can tell.
In marketing, that is a crisis. Marketing requires judgment — the ability to ask not just what the data says, but what it means. Not just whether a campaign worked, but why. Not just whether to accept an AI recommendation, but whether it reflects the brand, the moment and the relationship the company is trying to build.
Those questions cannot be delegated to an agent. They require a human being scrutinizing what a machine thinks is right.
The most dangerous marketer in the agentic era is not the one who rejects AI. It is the one who accepts everything it produces without question.
An agent can optimize what it has been given. It cannot question whether it has been given the right thing.
It can personalize a message based on behavioral signals. It cannot decide that the right move is to say nothing at all… to give a customer space, to protect a relationship rather than extract from it.
It can generate a thousand content variations and test them. It cannot feel the difference between a message that converts and a message that connects. It cannot sense when a campaign that performs well in the data is quietly damaging the brand.
It can execute a journey flawlessly. It cannot design one that reflects what customers actually want from this brand, at this point in their lives.
These are not limitations that will be solved by the next model release. They are structural. AI is trained on the past. The irreducible human job in marketing is to bring judgment about what should happen next, even when the data does not yet exist to support it.
The right mental model for the agentic era is not human versus machine. It is a human plus machine, with the human in charge.
That is the foundation of Positionless Marketing. For decades, marketing teams operated as an assembly line with handoffs. Positionless Marketing breaks that model by giving marketers three transformative powers: Data Power to immediately discover customer insights for precise targeting and hyper-personalization, without waiting for engineers; Creative Power to create channel-ready assets like copy and visuals, without waiting for creatives; and Optimization Power to run campaigns that optimize themselves through automated journeys and testing, without waiting for analysts. Handoffs are eliminated.
The Positionless Marketer is a multidisciplinary thinker who deploys AI agents to go beyond traditional positions. Agents handle what used to require waiting for three different teams, eliminating the assembly line. The marketer is no longer waiting on anyone. They are thinking bigger, moving across disciplines while keeping human judgment at the center of every decision the agents make.
This is a promotion, not a replacement. But it comes with real demands. Marketers who can think strategically, not just operationally. Who can evaluate AI output critically, not just accept it. Who can take accountability for what the agents do in their name.
Gartner’s Daryl Plummer stated it directly: organizations should prioritize behavioral changes alongside technological changes as first-order priorities. The technology is ready. The question is whether the humans in the marketing organization are.
The organizations that will win the next decade of marketing are not the ones that deploy the most agents. They are the ones that build the human capability to direct them well. Gartner's 40% prediction is not a warning to slow down. It is a warning to be deliberate. The difference between an agentic marketing operation that compounds value over time and one that wastes budget, violates policy, and erodes customer trust is not the technology. It is the human judgment sitting above it.
Marketing teams need to face facts in the agentic AI era: the agent is only as good as the indispensable human behind it.
A comparison of 5 AI search engines shows they cite different sources but converge on citing brands, a key to SEO for AI search.
The post Comparison Of AI Citation Patterns Offers Strategic SEO Insights appeared first on Search Engine Journal.
A PlayStation 5 Linux Loader has been released. Last month, a modder called Andy Nguyen, also known as “theflow0” and “TheOfficialFloW”, showcased their Linux-powered PlayStation 5 (PS5) and its PC gaming capabilities. Now, the modder has officially released a PS5 Linux Loader on GitHub, allowing others to turn their PlayStation 5 consoles into Linux PCs. […]
The post PS5 Linux project released to unlock PlayStation 5’s PC potential appeared first on OC3D.
The rumours are false; it’s “business as usual” over at Galax. This morning, several reports claimed that Galax is exiting the GPU market. These reports are false, and Palit has confirmed that it’s “business as usual” over at Galax and that the company will continue making GPUs. Galax has been part of the Palit […]
The post No, Galax is not exiting the GPU market appeared first on OC3D.
From near collapse to CPU dominance, we revisit 10 years of AMD Ryzen, benchmarking every flagship generation to see how performance, value, and architecture evolved.
One shouldn't expect any significant improvements even with the higher VRAM capacity, and this was obvious since there are no other upgrades. Leaked benchmarks show the laptop RTX 5070 12 GB is equivalent to the 8 GB version in multiple synthetic tests. GeForce RTX 5070 and RTX 5070-powered systems are enjoying huge popularity. To tackle the GPU shortages, NVIDIA has announced that it will now be supplying the RTX 5070 for mobile platforms with 12 GB of VRAM, equipped with 3 GB GDDR7 memory modules. The existing RTX 5070 laptop GPU brings just 8 GB of GDDR7 video memory, unlike […]
Read full article at https://wccftech.com/nvidias-laptop-rtx-5070-12-gb-matches-tthe-8-gb-version-in-synthetic-tests/

The Final Fantasy VII Remake trilogy has been in the works for a very long time and will finally conclude with a third game that has yet to be officially revealed. While Naoki Hamaguchi provided no new information in a fresh interview with ComicBook focused on the upcoming Nintendo Switch 2 and Xbox Series X|S versions of Final Fantasy VII Rebirth, the trilogy director revealed the guiding philosophy behind the development of all three games, one that will make the conclusion the culmination of the entire series. "Across the entire remake project, the guiding […]
Read full article at https://wccftech.com/final-fantasy-vii-remake-part-3-redefine-series-scale/

Over the last weekend, it was widely reported that Sony introduced changes to its PlayStation DRM policy, which now requires users to connect online every 30 days to continue playing every digital game purchased after March 2026. While the company has yet to provide an official clarification on the matter, detective work conducted by ResetERA forums member andshrew revealed how this new policy seems to be related to the 14-day refund window for digital purchases. Using a jailbroken PlayStation 4, the ResetERA user poked around behind the scenes, making some interesting findings, starting from how digital licenses work. "The PS4 will install a license file for all […]
Read full article at https://wccftech.com/playstation-drm-30-day-lock-vanishes-refund-window/

IssueCapture is a JavaScript widget you add to any website with one script tag. When users find a bug, they click a button, describe it, and optionally annotate a screenshot. The widget automatically captures console errors and failed network requests, then creates a detailed Jira ticket with all that context.
It works with both Jira Software and Jira Service Management, including team-managed projects. Optional AI features handle triage, categorization, and duplicate detection so developers get organized tickets instead of vague reports. The free tier includes 10 issues per month with no credit card needed.
Only EU is a curated directory that helps you replace US software and products with European alternatives. Browse categories like cloud storage, email, password managers, VPN, and more, or select tools you use to get tailored, GDPR-compliant recommendations. The site highlights providers with stricter environmental standards, shorter supply chains, and European quality, with clear details and links to explore each option.
GTA 6 is set for release this November, but as Rockstar Games is known for delaying its titles multiple times, the community remains wary of a potential last-minute slip into 2027. However, Take-Two CEO Strauss Zelnick suggested in a recent talk held during the iicon conference, as reported by IGN, that another delay is not on the horizon. "I think a lot of people will be calling in sick on November 19," Zelnick joked during his talk, clearly aware of how many in the community are planning to skip school or work to play one of the most anticipated games […]
Read full article at https://wccftech.com/gta-6-on-track-zelnick-brushes-off-delay-joke/

Yesterday was a big day for Rebel Wolves and their debut game, the open world action RPG The Blood of Dawnwalker. The Polish studio unveiled the game's release date (September 3) and the detailed PC system requirements. Moreover, in an aftershow Q&A, Creative Director Mateusz Tomaszkiewicz provided new information about the game. For example, he confirmed that full evil playthroughs will be possible, thanks to the ability to kill most NPCs without causing a game over. Evil run. Generally speaking, you can kill off the majority of NPCs. Maybe not every single one; there are specific cases where, for narrative […]
Read full article at https://wccftech.com/blood-of-dawnwalker-director-evil-playthroughs-bad-choices/

Revelion delivers autonomous AI-driven penetration testing designed for MSPs. Use a white-label client portal, full API, scheduled scans, and compliance framework mapping to send branded, enterprise-grade reports in hours. Control strategy, scope, and aggression, run fully autonomous or hybrid human-steered missions, and integrate with your RMM/PSA stack. UK-hosted and GDPR compliant, Revelion helps MSPs add recurring security revenue without extra headcount.
Happy Horse is an AI video creation platform that turns text prompts, images, and clips into cinematic videos. It preserves character identity across shots, offers director-level camera moves, and supports precise style control for photorealistic or stylized looks. You can generate native voice and singing with lip-sync or transform references with R2V workflows. Creators, marketers, educators, and filmmakers can quickly go from concept to export, with an API planned for embedding generation into other products.
Local CLI coding agent with deep Devin Cloud integration
Your full venture strategy, built in minutes.
PII guard for Claude Code to keep client data out of context

BeatCrate is a macOS DJ music library manager that helps you organize tracks, perfect metadata, verify audio quality, and prepare sets fast. It auto-tags from MusicBrainz, Discogs, iTunes, Beatport, and Traxsource with a merged view and inline diffs for confident edits. Analyze BPM with CoreML and DSP, inspect spectrograms, meter loudness, and search high-res artwork. Batch edit, filter with rule builders, and undo every change. Built with Swift and Metal, it supports MP3, FLAC, WAV, AIFF, and M4A on macOS 13+.
Spot signals, trigger outreach - turn posts into pipeline
Screenshot, record, annotate & edit video in one app
Keep AI-generated code healthy and maintainable
Notes, money, and health. Sorted.
Generate UI from your design system, not around it
Analytical skills for data agents running on Supabase
Everything users expect from modern chat. Out of the box.
Run your own Claude Code in your pocket.
Annotate any doc, URL, or folder - send feedback to agents
Terminal-based product manager
Open infrastructure for wearable-powered health products.
Free and open-source, stop designing. Describe.
Picsart's power right from your AI chat box
Vibe-train evals and guardrails tailored to your use case
Ship data-driven apps without breaking flow

AI Fruit lets creators generate talking fruit shorts, self-eating meme clips, ASMR bite videos, and vegetable roleplay scenes for TikTok, Reels, and Shorts. Start with a character and format, then pick a model to produce 1080p videos or polished images. Templates and a credit-based studio help you move from idea to publishable content, and a story generator supports scripts and dialogue for recurring characters.
Human risk management for the AI era