X gets rid of its popular night mode in-app setting
According to Nikita Bier, X's head of product, users will now need to change their device settings to activate the feature.

According to multiple reports, the company's contractors in Kenya are allegedly reviewing users' private intimate content for training purposes.
Since OpenAI began running ads four weeks ago, data tracked by Sensor Tower has shown more than 100 individual brand promotions.
The company was pressured into this concession by the EU Commission and is allowing competitors access in an effort to prevent regulatory proceedings.
The company touted an uptick in construction jobs and infrastructure support, while also committing to the White House's Ratepayer Protection Pledge.
BounceBunny is an AI performance coach that creates personalized basketball training plans to help you jump higher, move faster, and excel on the court. Every plan is customized based on your body metrics, position, available equipment, and injury history. Experience the same quality training that NBA players use at a fraction of the cost. BounceBunny is trusted by over 100 college basketball teams, including UCLA, Cornell, and Northwestern. Train smarter, perform better, and stay injury-free.
NanoClaw's creator says Google ranks a fake website above his project's real site despite 18K GitHub stars, press coverage, and structured data setup.
The post NanoClaw Creator Loses SEO Battle To Impostor Website appeared first on Search Engine Journal.
Promotional performance in the app was significantly more effective than for traditional linear television advertising, according to new data from the company.
ToolsFree.ai offers a collection of free online calculators, converters, and utility tools across finance, health, web development, unit conversion, text, and math. Use it without signup or limits on any device; every tool runs in your browser, so your data stays on your device. Explore mortgage and loan calculators, BMI and calorie calculators, QR and password generators, JSON formatting, and more, while following updates as new tools launch.
The Nothing Phone (4a) Pro is now teaching a lesson to those willing to learn: a masterclass in how to debut new tech, replete with genuine innovation rather than gimmicks and outright falsehood à la what Samsung just did at its Galaxy Unpacked event for the new S26 series.

Samsung's disastrous Galaxy Unpacked event: Pre-leaks and falsehoods

For the benefit of those who might not be aware, Samsung experienced a particularly egregious bout of channel leaks in the run-up to its Galaxy Unpacked event, one that saw an unreleased Galaxy S26 Ultra fall into the hands of a tech […]
Read full article at https://wccftech.com/nothing-phone-4a-pro-just-showed-samsungs-galaxy-s26-series-how-to-do-innovation/

The record-breaking Q1 2026 quarter saw Apple bring in a mammoth $143.756 billion, but this impressive figure was also accompanied by a statement made by CEO Tim Cook, hinting at which silicon would be found in the newly announced MacBook Neo. The A18 Pro found in the latter is still an insanely powerful chip, but the A19 Pro is on another level, and had it not been for the supply situation, we'd be getting a different set of specifications for the latest low-cost portable Mac. Supply constraints from TSMC's end meant that Apple couldn't secure sufficient A19 Pro shipments to […]
Read full article at https://wccftech.com/tim-cook-statement-this-year-explains-why-macbook-neo-does-not-feature-a19-pro/

The Trump administration is exploring options to address AI chip exports, and initial reports suggest the proposed regulations are far more aggressive than the industry anticipated.

The US Is Planning New AI Chip Export Regulations, By Looking at Compute Power Being Shipped Out

The debate around AI chip exports has emerged several times since chip manufacturers like NVIDIA and AMD achieved significant compute breakthroughs. This matter was also under intense focus by the Biden administration, which introduced the "AI Diffusion" act that addresses AI chip exports by categorizing countries into different levels, each with its own caveats. The Diffusion Act […]
Read full article at https://wccftech.com/the-us-could-soon-turn-nvidia-and-amds-ai-chips-into-a-foreign-policy-tool/

Gentlemen, we meet yet again. It's only been a week since the last time I wrote an article about Optiscaler, but development on the mod is happening at such a rapid pace that I once again have some updates to share with you. This time, it's AMD's Ray Regeneration that's getting the Optiscaler treatment. Thanks to the work done by DarkHelmet, you can now swap Nvidia's Ray Reconstruction for AMD's legally distinct Ray Regeneration denoiser. This is huge news, since currently there's a grand total of two titles with support for Ray Regen, and one of them isn't out till […]
Read full article at https://wccftech.com/i-tested-amd-ray-regeneration-in-cyberpunk-thanks-to-optiscaler-its-surprisingly-good/

MSI's X870 Tomahawk WiFi drops to $240 on Amazon with WiFi 7, USB4, and dual Gen5 M.2 slots for AM5 builds.
Read full article at https://wccftech.com/msi-mag-x870-tomahawk-wifi-drops-to-240-on-amazon/

For this year's I/O Save the Date, we're showing how anyone can build incredible games with help from Gemini.

Expect to hear more about Microsoft's "next generation" Xbox at GDC 2026. Microsoft's Asha Sharma, the CEO of Microsoft Gaming (Xbox), has unveiled Project Helix. Helix is the codename for Microsoft's next-generation Xbox console, and Microsoft plans to discuss the system in depth at GDC 2026. Project Helix will be able to play "Xbox and […]
The post Xbox unveils "Project Helix" and it can play "Xbox and PC games" appeared first on OC3D.
Track My Visibility monitors how your brand appears in AI-generated answers across ChatGPT, Claude, Perplexity, and Gemini. It runs prompts at scale, measures mentions, citations, position, and sentiment, and shows which pages drive visibility. You can compare against competitors, audit query coverage, and view URL-level citation paths. A dashboard displays model-wise coverage, share of voice, and AI-ready scores, while actionable recommendations help you address gaps and increase AI citations.
Asha Sharma, the recently installed chief executive officer of Microsoft Gaming and the new head of Xbox following Phil Spencer's retirement, has just teased the next-generation Xbox console in a post on her personal X (formerly Twitter) account, which we now know is codenamed Project Helix. Sharma shared the codename and what appears to be a new look for the Xbox logo, while also teasing that this new console will "lead in performance and play your Xbox and PC games," confirming reports that the next generation of consoles from Xbox will be a hybrid between a PC and console, capable of […]
Read full article at https://wccftech.com/asha-sharma-teases-next-gen-xbox-project-helix-pc-console-hybrid/

The launch of the 13-inch and 15-inch M5 MacBook Air is excellent news for those wanting a jaw-dropping deal on a portable Mac because Apple's older-generation M4 MacBook Air has dropped by $300 on Amazon. Of course, it should be mentioned that the extensive price cuts have been observed on a few 15-inch models, but that's still an attractive deal because we don't remember the last time that such discounts were introduced. Unfortunately, the stock is slowly dwindling, and if you don't act fast, you'll be out of luck and a cheaper M4 MacBook Air. If a chipset upgrade and base […]
Read full article at https://wccftech.com/select-m4-macbook-air-models-300-off-on-amazon/

Over the past few months, I've been shopping around for a first home, and one of the requisites was to have a dedicated office space. Fast forward to November 1st of this year, and my wife and I signed the final documents to get the keys to our new home. Autonomous actually reached out to us much earlier in the year and sent samples of both their Desk 5 Pro and ErgoChair Ultra (in black and white to match the Wccftech logo), but they remained sealed in a box until I finally had the space to install a brand new desk. […]
Read full article at https://wccftech.com/review/autonomous-desk-5-pro-and-ergochair-ultra-review-clean-and-professional/

Embark Studios' latest update for ARC Raiders isn't your bog-standard fixes or new content release. Instead, it's an update that the team rushed out the door this morning after a report from tech blogger and systems engineer Timothy Meadows pointed to an incident where two ARC Raiders players' private Discord DMs (direct messages) appeared in a game log file. Per Meadows' report, the game's Discord SDK captured private messages between two users and a Discord Bearer token. It was a massive over-extension of the data that Embark collects through ARC Raiders' Discord SDK, an issue that is thankfully now fixed […]
Read full article at https://wccftech.com/arc-raiders-latest-update-fixes-massive-security-flaw-private-discord-dms-collected-by-embark/


Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce's Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.
Why we care. AI Max isn't a minor update. It's Google's most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that's both an opportunity (possible growth) and a risk (an efficiency tradeoff).
By the numbers. The results of the analysis:

Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.
Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won't follow, Ryan concluded.
What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction, bringing PMax-style automation into classic Search. The result is three core features:

Four pitfalls Smarter Ecommerce identified:
Between the lines. Google's 14% uplift stat conspicuously excludes retail, an omission Ryan flags as significant for ecommerce advertisers. There's also a deeper irony: you're most likely to adopt AI Max if you're already running Broad Match, DSA, and PMax, yet Google says those accounts will see the lowest incremental benefit.
What's next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.

Ryan recommends activating AI Max's keywordless features in your existing Search campaigns now and beginning to wind down DSA, not migrating it to PMax.
Ryan's verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don't let FOMO around AI Overviews drive your decision.

The report. The Ultimate Guide to AI Max for Google Search
Lightning Assist accelerates your typing with instant text expansion, AI-powered commands, push-to-talk transcription, and smart hotkeys on Windows, macOS, and Linux. You can create resources and terminal resources, assign custom hotkeys, and insert code snippets, templates, or commands in any app. Use AI Enhance to polish text and AI Speech to dictate in real time, all backed by enterprise-grade security. Set it up in minutes and reclaim hours every day.
New SMEC study analyzes AI Max in Google Ads Search campaigns, showing a 13% conversion value lift but higher CPA and unpredictable ROAS results.
The post What SMEC's Data Reveals About AI Max Performance appeared first on Search Engine Journal.
Grand Theft Auto developer Rockstar Games' parent company, Take-Two Interactive, is also the owner of 2K, the massive publisher and developer behind several titles, though most notably the annual basketball series, NBA 2K. Today, Rockstar announced that it would be combining the two franchises, with players subscribed to GTA+, which grants special in-game bonuses in Grand Theft Auto V, getting access to NBA 2K26 on PS5 and Xbox Series X/S consoles for a limited time starting next week. The latest installment in the annualized basketball franchise will be accessible to GTA+ subscribers on March 10, and the full game, on top […]
Read full article at https://wccftech.com/rockstar-expands-gta-subscription-nba-2k26-ps5-xbox-series/

NVIDIA's ambitions for China are dimming by the day, as a new report indicates the AI giant is now looking to scale back H200 production in favour of ramping up Vera Rubin production.

NVIDIA Plans to Shift H200 Production Towards Vera Rubin, as it Prefers 'Consistency' Over Revenue

We have reported extensively on the NVIDIA-China saga in the past as well, and one of the more common trends in these stories is that both NVIDIA and China seem to be running in cycles, trying to catch each other. We'll discuss this aspect further ahead, but for now, according to […]
Read full article at https://wccftech.com/china-catch-22-is-pushing-nvidia-to-the-brink/

Learn more about AI Mode in Search's query fan-out method for visual search.
This new unboxing video from Google shows off the Pixel 10a's sleek, durable design and brand new Berry color.
Has OpenAI's increasing independence from Microsoft and, by extension, Bing, become an overly dependent relationship with Google?
Our study comparing shopping query fan-outs (QFOs) in ChatGPT from both Google and Bing carousels appears to have provided at least a partial answer to that question. Let's take a look at how this study was conceived and what we found.
In November 2025, a few researchers in the AI research space, including myself, detected a mysterious field in ChatGPT's source code: id_to_token_map. But what that field revealed when decoded was even more intriguing.
This field is what's called base64 encoded, but when we decoded it, it revealed what looked to be Google Shopping parameters, such as productid and offerid, but also language/locale parameters. Even more interesting? This field revealed a query used to look up that particular product.
To categorically prove this was indeed a Google Shopping link, we would have to be able to reconstruct the shopping URL solely from the extracted parameters.
Let's look at an example of what this looks like using the ChatGPT product carousel for the prompt "best smartphones under $500."

If we decode the relevant field, we can recreate the Google Shopping link from the extracted parameters.
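The decode-and-rebuild step above can be sketched in a few lines of Python. Note that the payload values and the Google Shopping URL shape below are illustrative assumptions; the real id_to_token_map tokens are opaque strings pulled from ChatGPT's source, and the exact URL format is not confirmed here.

```python
import base64
import json
from urllib.parse import urlencode

# Build a stand-in token (hypothetical values, for illustration only).
payload = {
    "productid": "1234567890",
    "offerid": "9876543210",
    "hl": "en",
    "gl": "us",
    "query": "best smartphones under $500",
}
token = base64.b64encode(json.dumps(payload).encode()).decode()

def decode_token(token: str) -> dict:
    """Decode a base64-encoded token back into its shopping parameters."""
    return json.loads(base64.b64decode(token))

params = decode_token(token)

# Reconstruct a Google Shopping product URL from the extracted parameters
# (hypothetical URL shape, not a confirmed format).
url = (
    f"https://www.google.com/shopping/product/{params['productid']}?"
    + urlencode({"hl": params["hl"], "gl": params["gl"]})
)
```

If the rebuilt URL resolves to the same product shown in the carousel, that is the confirmation the study was after.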
The big question was: Would this link correspond to the exact product in the ChatGPT product carousel? So we tried it:

It turns out that, in fact, yes it does!
But this decoding technique alone doesn't answer any of these important questions:
Using Peec AI data, the following study aimed to robustly prove once and for all that ChatGPT does indeed mainly source from Google Shopping.
To do this, we analyzed more than 40,000 carousel products and 200,000 organic products from each of Google and Bing. By comparing the similarity of the products, we got a very clear picture of what was really happening behind the scenes. Let's dig into our findings.
To answer whether shopping query fan-outs are different from normal search query fan-outs, we analyzed 1.1M shopping query fan-outs from Peec AI data and compared them to the normal search query fan-outs for the same user prompt. We found that they are almost always different:
| Comparison | Share unique |
| Shopping QFO unique to user prompt | 99.70% |
| Shopping QFO unique to normal search query fan-out | 98.31% |
To dive deeper, we explored the average word counts of both of these query fan-out types by calendar week.
The chart below clearly shows that normal fan-outs are significantly longer – 12 vs. seven words. That makes sense since search query fan-outs are used to retrieve contextual information. This means they need to be long enough to retrieve web results that are specific to the user prompt. Vector search (or comparing embeddings) works best with more context.
Shopping fan-outs, on the other hand, typically target a specific shopping results page and therefore do not need to be as long. It appears the main goal is to retrieve products based on the shopping fan-out. The data in this study supports the hypothesis that, rather than comparing chunks of text, ChatGPT relies heavily on organic Google Shopping results to populate its carousel.

Further evidence of the distinct nature of the shopping fan-outs surfaces when we look at how many are used per prompt. On average, 2.4 search fan-outs are used per prompt vs. just 1.16 for shopping fan-outs. For reasons similar to above, retrieving more contextual information often requires more search fan-outs vs. simply retrieving products. To populate an eight-product carousel in ChatGPT, it seems that, for the most part, one page of Google Shopping results is enough.

To answer this question in the fairest possible way, we extracted around 5,000 ChatGPT carousels comprising 43,000 products from the Peec AI dataset. Prompts were chosen to be as diverse as possible (see Methodology for the creation process).
We then extracted the organic shopping pages and retrieved the top 40 organic products for both Google and Bing shopping results. Paid ads and sponsored products were excluded from the analysis.
We used a three-step matching algorithm (see Methodology for exact details) to attain a similarity score between the ChatGPT product title and the title found in organic shopping results. This is because not only is ChatGPT probabilistic, but so is, to a certain extent, Google Shopping. Product titles can be rewritten with or without certain product features, and results are very sensitive to the exact proxy location where the results are retrieved.
We counted a product as matching if it reached a threshold of 0.8 or above; effectively, if it was the same brand and product name and exhibited a very high degree of similarity.
The results are summarized in the chart below.

Impressively, across 43,000 highly diverse ChatGPT carousel products, 45.8% were found to have an exact title match in the corresponding Google top 40 organic shopping products for that exact shopping fan-out.
For Bing, this exact match rate was just 0.48%.
If we simply look at the percentage of strong product matches across all eight ChatGPT carousel positions, over 83% were found in the Google top 40 products, but that number drops to just under 11% for products found on Bing. This is very strong evidence that ChatGPT sources its carousel products from organic Google Shopping results.
We also see a very high number of weak matches in Bing, at over 62%. This implies that the top 40 returned products for each shopping fan-out differ significantly across Google and Bing. This makes sense, as there are many thousands of possible combinations of brand and product that can be surfaced in shopping results.

Even if Bing found around 11% of ChatGPT carousel products, how many of those products were only found by Bing? Across the 43,000 carousel products, Bing found only 70 that were not found in Google Shopping, constituting just 0.16%. This means that in almost every case where there was a match in Bing, there was also a match in Google.
It seems unlikely, then, that ChatGPT is also sourcing products from Bing Shopping in the vast majority of cases.

Here we explore the most common Google Shopping positions (mean and median shown) for each ChatGPT carousel position:

For example, for the first carousel position we can see that the average Google Shopping position is around five. Note that we see a sloping trendline for the carousel positions that correspond to higher Google Shopping positions. This implies that ChatGPT sources top carousel products from higher Google Shopping positions.
Plotted another way, we can visualize the cumulative number of strong matches across organic Google Shopping positions. This chart allows us to see that 60% of the strong product matches are found in the top 10 Google Shopping results alone.
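This kind of positional-bias summary is easy to reproduce. The positions below are made-up sample data, not the study's real distribution; the helper simply computes the share of strong matches found at or above a rank cutoff.

```python
# Hypothetical sample data: the Google Shopping position (1-40) of each
# strong match found for a set of carousel products.
match_positions = [1, 2, 3, 5, 7, 9, 12, 15, 18, 22, 31, 38]

def share_within(positions, cutoff):
    """Fraction of strong matches at or above (i.e., <=) a rank cutoff."""
    return sum(1 for p in positions if p <= cutoff) / len(positions)

top10_share = share_within(match_positions, 10)  # matches from positions 1-10
top20_share = share_within(match_positions, 20)  # matches from positions 1-20
```

Applied to the study's real match data, this is the calculation behind figures like "60% of strong matches come from the top 10."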

Comparing the top 20 vs. positions 21-40, ChatGPT's favoritism for higher positions becomes clear, with an overwhelming majority of matches (almost 84%) coming from the top 20:

Finally, we explored whether the prompt being branded vs. non-branded made a difference to the product matching results.
The results show a similar high level of product matching for both branded and non-branded prompts, with only slightly higher match rates for non-branded:

This study analyzed over 43,000 ChatGPT carousel products across 10 industry verticals and compared them against 200,000+ organic shopping results from both Google and Bing. The findings painted a clear picture.
Over 83% of ChatGPT carousel products were found as strong matches in Googleβs top 40 organic shopping results. For Bing, that figure was just 11%, and of those, only 70 products across the entire dataset (0.16%) were found exclusively in Bing. In almost every case where Bing returned a match, Google had already returned the same product.
The data strongly supports this. Shopping query fan-outs are distinct from normal search fan-outs 98.3% of the time. They are significantly shorter (seven vs. 12 words), and ChatGPT uses far fewer of them per prompt (1.16 vs. 2.4). This makes sense; populating a product carousel is a fundamentally different task from gathering contextual information to construct a written answer. One is about retrieving structured product listings from a shopping index, while the other is meant to retrieve web pages rich enough in context for vector search and re-ranking to work effectively.
The data shows a clear positional bias, with 60% of strong matches coming from the top 10 Google Shopping results and nearly 84% from the top 20. ChatGPT carousel position correlates with Google Shopping rank, meaning products that rank higher in Google Shopping are more likely to appear earlier in the ChatGPT carousel.
Since these patterns hold across branded and non-branded prompts, and across all 10 verticals tested, this reinforces that this is a systematic architectural behavior rather than a category-specific or query-specific artifact.
For brands and retailers, the implication is straightforward: Your Google Shopping ranking strongly influences whether your products make it into ChatGPT's carousel. These findings indicate that the selection set of carousel products in many cases is effectively the top 40 organic Google Shopping positions for the corresponding shopping fan-out query.
But while product ranking in Google Shopping plays a role, it doesn't tell the full story. It is likely that other factors, such as overall product mentions and sentiment in the context sources retrieved, also factor into the final ChatGPT carousel selection and ranking.
Understanding the full picture in terms of how your products are perceived across relevant sources, as well as how you show up on Google Shopping, could be the key to understanding ChatGPT product carousels.
For the AI research community, this study provides robust, large-scale evidence that ChatGPT's product carousel operates as an independent retrieval pipeline for the selection set of products, separate from the contextual web search that powers the written portion of its responses. It is possible, and even likely, that for the final selection and ranking of products, ChatGPT uses contextual clues such as product sentiment from the sources retrieved by the normal search fan-outs.
As always, this represents a snapshot of current behavior. OpenAI could change its retrieval sources or methods at any time, but this behavior has been consistent in our findings for at least the last four months.
Measure how much product overlap there is between ChatGPT Shopping (via product carousels) and Google Shopping organic results for the same queries, across 10 industry verticals. This was contrasted to Bing shopping results as a control using an identical pipeline.
Specifically, the study evaluated:
Prompts were created with the purpose of triggering ChatGPT carousels. To maximize diversity, a mixture of branded and non-branded prompts was used, as well as prompts that explicitly included a price and ones that did not.
Additionally, a diverse selection of verticals was chosen to make the findings more robust. These were: Apparel & Footwear, Baby & Kids, Beauty & Personal Care, Electronics, Home Improvement, Home & Kitchen, Office Supplies, Pet Supplies, Sports & Outdoors, Toys & Games.
The product matching algorithm compared ChatGPT product titles against the top 40 Google Shopping titles using a three-stage cascade approach.
The goal was to find the best match between a ChatGPT product title and the corresponding Google Shopping titles. A match was determined using a cascade of three stages:
This approach was set to be fairly conservative, and 0.8 was determined as a reasonable threshold for a product match, as this often corresponds very closely to the same brand and product.
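A cascade like this can be sketched as follows. The stage choices here (exact match, then normalized match, then fuzzy similarity via difflib) are assumptions for illustration, not the study's exact three stages, which are only summarized in this article.

```python
from difflib import SequenceMatcher

def normalize(title: str) -> str:
    # Lowercase, treat hyphens as spaces, and collapse whitespace.
    return " ".join(title.lower().replace("-", " ").split())

def cascade_score(a: str, b: str) -> float:
    """Score two titles with an escalating cascade: cheap exact checks
    first, fuzzy similarity only as a fallback."""
    if a == b:
        return 1.0                      # stage 1: exact string match
    if normalize(a) == normalize(b):
        return 0.95                     # stage 2: match after normalization
    # stage 3: fuzzy similarity on the normalized titles
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def best_match(chatgpt_title: str, shopping_titles: list[str]) -> tuple[str, float]:
    """Best-scoring Google Shopping title for one ChatGPT carousel product."""
    return max(((t, cascade_score(chatgpt_title, t)) for t in shopping_titles),
               key=lambda pair: pair[1])
```

A product would then count as a match when `best_match` returns a score of 0.8 or above, mirroring the study's conservative threshold.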
Real examples of matching thresholds from the data:
| Match threshold | Description | ChatGPT product | Google Shopping | Differences observed |
| 1.0 | Exact string match, no differences | Hot Wheels RC 1:64 Mustang GTD | Hot Wheels RC 1:64 Mustang GTD | None |
| 0.95 | Near exact; minor differences such as hyphenation or punctuation only | Learning Resources Snap-n-Learn Matching Dinos | Learning Resources Snap‑n‑Learn Matching Dinos | The hyphen character is a different Unicode character |
| 0.9 | Same brand and product, additional non-crucial words allowed | Block Tech 250 Piece Set | Block Tech 250 Piece Building Blocks Set | "Building" added, but product and brand are the same |
| 0.85 | Same product and brand, potentially slightly different word order and additional non-crucial words | LEGO Japanese Red Maple Bonsai Tree | Japanese Red Maple Bonsai Tree LEGO Botanicals | Different word order and one additional word, "Botanicals"; same product and brand |
| 0.8 (match threshold) | Same brand and product, possibly additional descriptors | Cards Game Against FRIENDS – Limited Edition | Cards Game Against FRIENDS – Limited Edition – Party Card Games For Adults | Same brand and product with additional descriptors that don't affect the match |
| 0.75 | Same brand and product line, very minor product differences such as size or dimensions | My Sweet Love 14-inch My Cuddly Baby Doll | My Sweet Love 8-Inch MinWeBaby Doll | Same brand and product line but a different size |
| 0.7 | Same brand, often slightly different product within the same category | Adventure Force Ram Truck RC Car | Adventure Force McLaren 765LT RC Car | Same brand and product category but a different individual product |
| 0.65 | Same brand, often slightly different product within the same category | Mattel 300-Piece Puzzle | Mattel 80th Anniversary Puzzle | Same brand and product category but a different individual product |
| 0.6 | Typically same product category, but often different brand and product line | Tell Me Without Telling Me Party Card Game | Elimino! Card Game | Different brand and product line; same overall category of "card game" |
| 0.55 | Similar product category but usually a different brand and/or product | Furby Interactive Plush Toy Interactive Digital Pet Toy | Interactive Digital Pet Toy | Different brand; similar product category but different specific product |
Nvidia reportedly targets "around Computex" launch for its 9GB RTX 5050

Benchlife.info claims to have confirmed yesterday's reports of Nvidia's planned RTX 5050 9GB graphics card, and has "confirmed" a release date for the GPU around Computex 2026. Nvidia's reportedly planning to upgrade its RTX 5050 graphics card to 3GB 28 Gbps GDDR7 memory modules, […]
The post Nvidia targets RTX 5050 9GB launch around Computex appeared first on OC3D.
After layoffs at Skate developer Full Circle just last week, the remainder of the team at EA's studio continues to work on updating the game with new content, the latest of which is Season 3: Fluid Flashback, which begins on March 10, 2026, and promises to bring skaters "back to skateboarding's first major era," which it describes as a time when "polyurethane wheels replaced clay and metal ones," and when the sport began to grow and evolve rapidly. That means elements of San Vansterdam will be taken back to the 1970s, "with vibrant colors and bold designs." Areas like Rolling […]
Read full article at https://wccftech.com/ea-skate-season-3-fluid-flashback-revealed/

This afternoon, independent Polish developer Reikon Games announced that RUINER 2 is in development for PC. As suggested by the title, it will be a sequel to 2017's twin-stick shooter, which was well-received by fans and critics. RUINER earned an 8 out of 10 score on Wccftech: RUINER is a no-brainer if you are interested in fast-paced action games that require real skill to truly perfect. However, RUINER 2 will be greatly expanded: it will support co-op gameplay and shift its genre to "cyberpunk action RPG". Marek Roefler, Game Director of the sequel at Reikon, said in a statement: We […]
Read full article at https://wccftech.com/ruiner-2-announced-pc-co-op-cyberpunk-action-rpg/

Epic Games is officially suing former contractor AdiraFNInfo, marking a significant escalation in the industry-wide hard stance against leakers. Following Activision's recent shutdown of a Call of Duty leaker, Epic's legal action confirms that gaming companies have had enough of confidential IP and trade secrets being shared ahead of official announcements. As confirmed today with a message on X, Epic Games "took legal action against a former contractor who repeatedly leaked confidential partner IP and trade secrets that they received while working with Epic." The publisher absolutely does "not allow this and will continue to take action when Epic team […]
Read full article at https://wccftech.com/epic-games-sues-former-contractor-and-known-fortnite-leaker-adirafninfo/

Assassin's Creed Unity has received a 60 FPS patch today for PlayStation 5, Xbox Series X, and Xbox Series S, which may have packed an undocumented bump to 4K resolution. Though it's a previous-generation title, this update makes the 2014 classic relevant again for those wishing to experience the series' old, more straightforward gameplay formula. According to multiple online reports, today's Assassin's Creed Unity patch increases resolution up to 4K. While no one has done an actual pixel count, the game's PlayStation 5 update history mentions 4K resolution and 60 FPS gameplay. Some users, such as ResetERA's RayCharlizard, shared screenshots captured […]
Read full article at https://wccftech.com/assassins-creed-unity-ps5-xsx-patch-4k/

Yesterday, the official X (formerly Twitter) account for the United States White House published a video promoting its ongoing strikes on Iran, which contained footage and UI elements from Microsoft and Activision's popular shooter franchise, Call of Duty. The United States and Israel launched strikes against Iran this past weekend, which recent reports estimate have resulted in 1,230 casualties. Amidst the responses to the video, which began with Call of Duty footage followed by real-life footage of the strikes, Chance Glasco, one of the original founders of Infinity Ward and developer on Call of Duty, said that the video was […]
Read full article at https://wccftech.com/call-of-duty-co-founder-says-activision-wanted-cod-about-iran-attacking-israel/


For 20 years, the web has run on a simple trade: publish content that meets a person's needs, rank in search, earn traffic, then monetize that traffic through products, services, affiliate referrals, or ads.
Zero-click answers and AI search are rewriting that relationship. The new question is whether AI will cite you as a source β and whether that visibility can turn into revenue.
To understand who gets included and who gets routed around, I ran over 200 AI visibility audits across 10 industries.
The pattern was consistent: Most sites are easy to parse, but hard to justify citing. And the industries that rely on discovery traffic the most are often the ones making themselves the hardest to access.
I ran 201 audits using the same rubric and captured an overall AI visibility score, plus four subscores:Β
The dataset included 201 audits across 10 industries:
Note that there was a page type skew β the sample is homepage-heavy (131 homepages, 13 articles, with the remainder a mix of pages). That matters because homepages tend to be marketing-heavy and evidence-light.
I also tracked access failures because βerrorβ results are part of the story. 38 of the 201 audits (18.9%) returned an error, meaning the agent was likely blocked or couldnβt reliably access the content.
An additional eight audits were technically processed but scored 0 due to missing subscores, consistent with partial extraction or app-style rendering that yields little accessible content.
When I summarized score distributions, I focused on the successfully processed audits (163 sites), so βcannot accessβ didnβt get mixed with βlow quality.β I treated error rate by industry as its own signal because it indicated whether AI systems could reliably use a site as a source.
The table below shows how the industries in the dataset performed in the audits.
| Rank | Industry | Error rate | Median overall | Median authority | Median extractability | At risk |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Travel booking and trip planning | 33.3% | 45.5 | 31.0 | 52.0 | High |
| 2 | Job boards and career marketplaces | 40.0% | 64.0 | 44.0 | 74.0 | High |
| 3 | Legal directories and lead gen | 35.0% | 63.0 | 44.0 | 74.0 | High |
| 4 | Coupons and deals | 20.0% | 62.0 | 36.0 | 74.0 | High |
| 5 | Local directories and lead gen | 5.3% | 64.0 | 38.0 | 74.0 | Medium |
| 6 | Online courses and learning marketplaces | 30.0% | 67.5 | 46.5 | 80.0 | Medium |
| 7 | Health info and symptom lookups | 15.0% | 69.0 | 52.0 | 80.0 | Low |
| 8 | Personal finance comparison | 5.0% | 67.0 | 52.0 | 78.0 | Low |
| 9 | Affiliate product reviews | 0.0% | 69.5 | 54.0 | 74.0 | Low |
| 10 | Recipes and cooking content | 5.0% | 75.0 | 55.5 | 81.5 | Low |
The findings show that most websites aren't built to be cited consistently. Here are the three numbers that matter.
38 of 201 sites (18.9%) returned an error. In some categories, it was far worse: job boards (40%), legal directories (35%), travel booking (33%), and course marketplaces (30%). In those spaces, a third to nearly half of the market is effectively AI-dark by default.
Job boards and legal directories showed the heaviest AI blocking of any industries.
Across the 163 processed audits:
Translation: Most brands aren't built to be reliably used and cited.
Median subscores across processed audits:
Most pages are easy to parse. Far fewer are easy to justify citing. Two repeated findings explain why:
That should change how you think about risk: the bigger threat isn't losing traffic, it's being removed from the consideration set.
Dig deeper: What 4 AI search experiments reveal about attribution and buying decisions
Industries disappear for three reasons. You can think of them as three failure modes.
If agents can't consistently access your content, the model has less to work with and will either route around you or fill in the gaps from other sources.
What access failure looks like:
Why this causes vanishing:
Trust failure is quieter. The agent can access your page, parse it, and summarize it, but the page doesn't provide enough proof for the model to confidently cite it as a source.
This was the dominant pattern in the completed audits. In plain language: Your content is readable, but it isn't defensible.
The clearest proof of this showed up when I compared page types:
A polished homepage isn't proof. If you want to be cited for anything beyond your brand name, a typical homepage alone isn't enough. Evidence usually lives in articles, explainers, data pages, policy pages, and methodology pages.
Utility failure is the most painful. You might get included. You might get cited. But if your value is only information, AI can compress it into an answer, and the user never needs to visit your site.
Visibility determines whether you appear in the conversation. Utility determines whether appearing turns into revenue.
A practical way to think about it:
Access failure gets you excluded. Trust failure gets you skipped. Utility failure gets you summarized.
Once access, trust, and utility are viewed together, the vulnerable industries stop looking random.
The categories that repeatedly showed high risk in my dataset share three traits:
That's why travel booking, job boards, legal directories, and coupon sites clustered as the most exposed categories in this dataset.
The bigger takeaway? Your website can be built in a way that invites exclusion, even if your business is healthy.
Dig deeper: Why every AI search study tells a different story
Some industries will feel this harder than others. A site funded primarily by high-volume informational traffic is more exposed to zero-click behavior. But even in those categories, the path forward is to stop selling information alone.
The big mistake right now is treating AI search like a ranking update, when it's an economic update. The audits made two things obvious:
The threat is invisibility. You don't win by hiding. You win by becoming cite-worthy and by building something the user still needs after the answer is delivered.
Trust plus utility is the new moat. Anything else is just playing from yesterday's playbook.

Sony reportedly halts PC porting efforts for single-player PlayStation 5 games. Recent years have seen Sony bring more and more of its classic PlayStation titles to PC, to the point that it created its PlayStation PC publishing unit in 2021 to support these efforts. Now, it looks like Sony has fallen out of love with […]
The post Sony shifts PC strategy towards console exclusivity appeared first on OC3D.
NVIDIA's CEO has talked about the 'agentic AI' inflection point at the Morgan Stanley conference, and he has called out OpenClaw as the "most important" software release of our times. NVIDIA's CEO Says That Agentic AI Has Brought 1,000x Higher Token Usage, Bringing In Immense Compute Demand. Jensen has talked about AI being a "5-layer cake", and one of the more interesting layers that yields the most returns to hyperscalers and frontier labs is the applications layer. OpenClaw and AI agents are examples of how AI, when placed in a hyper-personalized environment, yields results that replicate human workloads. NVIDIA's CEO […]
Read full article at https://wccftech.com/nvidia-ceo-says-openclaw-did-in-3-weeks-what-linux-took-30-years/

VR developer and publisher nDreams, the studio behind VR titles like Reach, Vendetta Forever, Synapse, Ghostbusters: Rise of the Ghost Lord, and more, has just announced "a significant reduction in overall staffing levels," resulting in the closure of two of its internal studios, Near Light and Compass studios. Confirmed in a statement on the company's LinkedIn page, the layoffs will impact studios across nDreams' suite of teams as the company gets restructured to put nDreams Elevation at its core, though only the aforementioned Near Light and Compass will be shuttered. Between the studios, 78 developers are impacted by the closures. […]
Read full article at https://wccftech.com/vr-studio-ndreams-announces-significant-layoffs-shuts-down-two-internal-studios/

Apple has finally debuted its latest chronically hyped-up budget offering, dubbed the MacBook Neo, replete with specs that barely qualify for 2016, let alone 2026. Of course, budget offerings almost always cut corners in one way or another. But how do you justify two USB-C ports with wildly different characteristics and no way of knowing which is which until you actually plug in your peripheral? What about a heavily binned SoC, a hobbled trackpad, and pricing tiers that make an M3 MacBook Air appear like a godsend? Apple seems to have designed the MacBook Neo specifically to cater […]
Read full article at https://wccftech.com/apple-just-debuted-a-glorified-piece-of-e-junk-and-called-it-macbook-neo/

A company that prides itself on its stringent focus on detail can often make head-scratching blunders that you'd never expect it to. On this occasion, one eagle-eyed individual caught Apple comparing a 15-inch M1 MacBook Air, a product that has never existed in its lineup, to the M5, which currently powers the technology giant's latest portable Mac. Fortunately, the technology giant rectified the mistake, but not before someone posted it on social media for millions to see. Larger 15-inch MacBook Air models were added to the lineup after the M2's release. To remind you, Apple's M1, which launched back in […]
Read full article at https://wccftech.com/apple-accidentally-mentioned-non-existent-15-inch-m1-macbook-air-in-m5-comparison/

To combat the current GPU shortages due to higher VRAM prices, NVIDIA is reportedly bringing back its popular RTX 30 series budget GPU. Five-Year-Old GPU to Return to Shelves; AIBs Will Reportedly Start Getting the GPUs Between March 10 and March 20. It's remarkable how we are going back to older hardware, but it's one of the only measures left for hardware manufacturers to maintain a steady supply and meet the demand. The Ampere architecture-based GPU, GeForce RTX 3060, is about to make a comeback, as we reported recently. The RTX 3060 has been one of the most popular graphics cards in […]
Read full article at https://wccftech.com/nvidia-geforce-rtx-3060-expected-to-make-a-comeback-in-mid-march/

Today, California-based independent developer Ember Lab announced that Kena: Bridge of Spirits will launch on Nintendo Switch 2 this spring. Kena: Bridge of Spirits first launched on PlayStation platforms and PC in September 2021 and went on to win several awards, including Best Independent Game and Best Debut Indie Game at The Game Awards 2021. On Wccftech, the game earned an 8 out of 10 score from reviewer Francesco De Meo, who wrote at the time: Despite featuring a very familiar experience inspired by The Legend of Zelda series, Kena: Bridge of Spirits manages to stand out from the competition […]
Read full article at https://wccftech.com/kena-bridge-spirits-nintendo-switch-2-spring-2026/

After GeForce NOW celebrated its sixth anniversary last month, NVIDIA has prepared another large lineup of PC games for the cloud platform in March. Throughout the month, NVIDIA is planning to add fifteen games to the library, starting with the following eight this week: The most interesting new releases are landing later this month, though, with Crimson Desert from Pearl Abyss leading the pack. The game will support NVIDIA DLSS Super Resolution for GeForce NOW users with the Premium tier and also NVIDIA DLSS Multi Frame Generation and Ray Reconstruction for GeForce NOW users with the Ultimate tier. Weirdly enough, […]
Read full article at https://wccftech.com/geforce-now-march-2026-lineup-crimson-desert-15-games/

The "upgraded" GeForce RTX 5050 is expected to arrive in June this year at the Computex event. NVIDIA Will Reportedly Debut Its 9 GB RTX 5050 GPU at Computex; Same GPU, but Newer GDDR7 Memory With an Additional Gigabyte of VRAM. Instead of increasing the VRAM to 12 GB, NVIDIA is straight up reducing the RTX 5050's memory bus width to offer 9 GB VRAM capacity. The original version comes with 8 GB GDDR6 memory and is the only GPU in the RTX 50 series to run on previous-gen GDDR memory. That said, with the new GPU planned for launch, […]
Read full article at https://wccftech.com/nvidia-geforce-rtx-5050-with-9-gb-gddr7-memory-rumored-to-debut-at-computex/

Superpollutants have been responsible for close to half of planetary warming, and without action they'll continue to warm the planet rapidly in the decades to come. That…
How content is structured in an article or blog post might not seem controversial. But, apparently, Google doesn't want you to create bite-sized chunks of content simply to please LLMs. Called "chunking," this technique helps get your content noticed by AI models and reflects how readers actually engage with online content.
Chunking may make content more retrievable or citable in AI search, but ultimately, it improves the flow of content and makes concepts easier for people to understand. Let's talk about how chunking works and when to use it.
Chunking is the practice of organizing text into distinct, self-contained units of meaning. When content is chunked, information is segmented so each paragraph focuses on a single idea and contains everything the reader needs to understand the basics of that idea simply and quickly.
Someone should be able to read a single paragraph and grasp the concept without having to hunt for context in the surrounding words.
The recent criticism from Google suggests that the practice of chunking over-optimizes content, specifically so that it will show up in AI answers. The idea that people are writing specifically for AI assumes that what's good for AI is somehow bad for human readers.
But really, chunking helps communicate ideas for both readers and search retrieval systems. When content is chunked, it doesn't dumb down or artificially fragment ideas. It organizes information to match how people actually read online content, making articles easier to scan.
Chunking also helps AI systems because they operate at the passage level rather than the page level. For example, when a system needs to identify an answer for "how to measure keyword cannibalization," a heading that says exactly that, followed by a focused paragraph, would create a clear match.
In contrast, when an answer to that same question is buried in a dense paragraph covering three other topics, that information gets diluted. The AI might see relevant keywords, but if the text meanders between ideas, it will have a lower confidence that the passage definitively answers the query.
Clear structure creates clear meaning.
Chunking helps readers scan your content and helps AI systems accurately identify what it says.
Dig deeper: Chunk, cite, clarify, build: A content framework for AI search
When writing from scratch, integrate chunking into your process from the start.
However, it may not be worth your time to edit existing content solely to chunk it. You may find that some articles already follow chunking principles, even if they weren't explicitly planned to do so. Others may be out of date or poorly structured, requiring more substantial rewrites.
If you want to chunk existing content, prioritize pieces that:
Skip chunking edits for content that:
If a piece is impactful because it creates an emotional arc, breaking it into discrete chunks could hurt it. If your content succeeds by carrying readers through a journey rather than letting them jump to an answer, preserve that flow.
For example:
Dig deeper: Chunks, passages and micro-answer engine optimization wins in Google AI Mode
A chunk in a piece of content should be long enough to explain one thought. This often results in shorter paragraphs – the defining feature is a singular focus, not the word count.
These focused paragraphs sit under clear headings. The heading tells the reader what to expect, and the chunks beneath it deliver on that expectation.
To include chunking in your writing, the most effective approach is to integrate it from the start.
Define for yourself or other writers which ideas or concepts in a given topic constitute a chunk, focusing on paragraphs and heading descriptions.
If using content briefs, make it clear in your outlines that each H2 or H3 should cover one complete concept and the content under that heading should fully explain the concept.
Focus your efforts on high-value pages first when editing existing content. Prioritize pages that receive traffic but struggle with engagement or pages that rank well but aren't being cited.
Don't let Google convince you that chunking is a hack. Chunking makes content work better for everyone and everything – from readers scanning for specific information to AI systems matching queries to answers.
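For instance, a chunked section in a brief might translate into markup like this (topic and wording are hypothetical), where the heading names one concept and the paragraph beneath it answers it in full:

```html
<h2>How to measure keyword cannibalization</h2>
<p>
  Keyword cannibalization happens when multiple pages on your site compete
  for the same query. To measure it, list the URLs ranking for one keyword,
  compare their impressions and clicks, and check whether they swap positions
  over time.
</p>
```

A reader (or a retrieval system) can lift that heading-plus-paragraph pair out of the page and still have a complete answer.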
Dig deeper: How to build a context-first AI search optimization strategy

You've probably heard developers talk about the DOM. Maybe you've even inspected it in DevTools or seen it referenced in Google Search Console.
But what, exactly, is it? And why should SEOs care? Let's take a look at what it is, why it's important, and how to best optimize it.
The Document Object Model (DOM) is a browserβs live, in-memory representation of your webpage. It acts as the interface that allows programs like JavaScript to interact with your content.
The DOM is organized as a hierarchical tree, similar to a family tree: elements like <body>, <p>, and <a> become branches (or "nodes").
This hierarchy is critical because it allows the browser (and search engines) to understand the relationship between different parts of your content. For example, proper hierarchical order lets your browser understand that a specific paragraph belongs to a specific heading.
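As a minimal illustration (hypothetical markup), it is the nesting in the HTML file that defines those parent-child relationships:

```html
<html>                            <!-- root node -->
  <body>                          <!-- child of <html> -->
    <h1>Page title</h1>           <!-- child of <body> -->
    <p>A paragraph containing
      <a href="/about">a link</a> <!-- child of <p> -->
    </p>
  </body>
</html>
```

Here the browser knows the link belongs to the paragraph, and the paragraph and heading both belong to the body.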
The DOM itself is actually a JavaScript object structure stored in memory, but browsers show it to you as markup that looks very much like HTML.
You can see this HTML representation of the DOM by right-clicking on a page and selecting Inspect > Elements. This is called the Elements panel. I've outlined it in the red box below:

In the Elements panel inside DevTools, you can:
Note that DevTools doesn't necessarily show you what Googlebot sees. I'll circle back to what that means later in this article.
To understand why the DOM often looks different from your HTML file, you first need to understand how the browser creates it. That begins with your browser building the DOM tree.
When your browser requests a page, the server sends back an HTML file. The browser reads this response line by line and translates it into "tokens" (tags like <html>, <body>, <div>).
These tokens are then converted into distinct "nodes," which serve as the building blocks of the page. The browser links these nodes together in a parent-child hierarchy to form the tree structure.
You can visualize the process like this:

It's important to know that the browser simultaneously creates a tree-like structure for CSS, known as the CSS Object Model (CSSOM), which allows JavaScript to read and modify CSS dynamically. However, for SEO, the CSSOM matters far less than the DOM.
JavaScript often executes while the tree is still being built. If the browser encounters a <script> tag (without defer or async attributes, which allow the script to load without blocking parsing), it pauses construction, runs the script, and then finishes building the tree.
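A minimal sketch of the three loading patterns (the script path is hypothetical):

```html
<!-- Blocking: parsing pauses while this downloads and runs -->
<script src="/js/app.js"></script>

<!-- Deferred: downloads in parallel, runs after the DOM tree is built,
     in document order -->
<script src="/js/app.js" defer></script>

<!-- Async: downloads in parallel, runs as soon as it arrives
     (execution order not guaranteed) -->
<script src="/js/app.js" async></script>
```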
During this execution, scripts can modify the DOM by injecting new content, removing nodes, or changing links. This is why the HTML you see in View Source often looks different from what you see in the Elements panel.
Here's an example of what I mean. Each time I click the button below, it adds a new paragraph element to the DOM, updating what the user sees.
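A sketch of how a demo like that can be wired up (element IDs and text are hypothetical):

```html
<button id="add-note">Add paragraph</button>
<div id="notes"></div>

<script>
  // Clicking the button creates a brand-new DOM node;
  // the underlying HTML file never changes.
  document.getElementById('add-note').addEventListener('click', () => {
    const p = document.createElement('p');
    p.textContent = 'A new paragraph, added dynamically';
    document.getElementById('notes').appendChild(p);
  });
</script>
```

View Source would still show an empty <div id="notes">, while the Elements panel would show every paragraph added so far.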

Your HTML is the starting point, a blueprint, if you will, but the DOM is what the browser builds from that blueprint.
Once the DOM is created, it can change dynamically without ever touching the underlying HTML file.
Dig deeper: JavaScript SEO: How to make dynamic content crawlable
Modern search engines, such as Google, render pages using a headless browser (Chromium). This means that they evaluate the DOM rather than just the HTML response.
When Googlebot crawls a page, it first parses the HTML, then uses the Web Rendering Service to execute JavaScript and take a DOM snapshot for indexing.
The process looks like this:

However, there are important limitations to understand and keep in mind for your website:
Looking ahead to a world that's becoming more AI-dependent, AI agents will increasingly need to interact with websites to complete tasks for users, not just crawl for indexing.
These agents will need to navigate your DOM, click elements, fill forms, and extract information to complete their tasks, making a well-structured, accessible DOM even more critical.
The URL inspection tool in Google Search Console shows how Google renders your page's DOM, also known in SEO terms as the "rendered HTML," and highlights any issues Googlebot might have encountered.
This tool is crucial because it reveals the version of the page Google indexes, not just what your browser renders. If Google can't see it, it can't index it, which could impact your SEO efforts.
In GSC, you can access this by clicking URL inspection, entering a URL, and selecting View Crawled Page.
The panel below, marked in red, displays Googlebot's version of the rendered HTML.

If you don't have access to the property, you can also use Google's Rich Results Test, which lets you do the same thing for any webpage.
Dig deeper: Google Search Console URL Inspection tool: 7 practical SEO use cases
The shadow DOM is a web standard that allows developers to encapsulate parts of the DOM. Think of it as a separate, isolated DOM tree attached to an element, hidden from the main DOM.
The shadow tree starts with a shadow root, and elements attach to it the same way they do in the light (normal) DOM. It looks like this:

Why does this exist? It's primarily used to keep styles, scripts, and markup self-contained. Styles defined here cannot bleed out to the rest of the page, and vice versa. For example, a chat widget or feedback form might use shadow DOM to ensure its appearance isn't affected by the host site's styles.
I've added a shadow DOM to our sample page below to show what it looks like in practice. There's a new div in the HTML file, and JavaScript then adds a div with text inside it.
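A minimal sketch of that setup (the ID and text are hypothetical):

```html
<div id="widget-host"></div>

<script>
  // Attach an isolated shadow tree to the host element; styles inside it
  // won't leak out to the page, and page styles won't leak in.
  const host = document.getElementById('widget-host');
  const shadowRoot = host.attachShadow({ mode: 'open' });

  const div = document.createElement('div');
  div.textContent = 'Hello from inside the shadow DOM';
  shadowRoot.appendChild(div);
</script>
```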

When rendering pages, Googlebot flattens both shadow DOM and light DOM and treats shadow DOM the same as other DOM content once rendered.
As you can see below, I put this page's URL into Google's Rich Results Test to view the rendered HTML, and you can see the paragraph text is visible.

Follow these practices to ensure search engines can crawl, render, and index your content effectively.
Your most important content must be in the DOM and appear without user interaction. This is imperative for proper indexing. Remember, Googlebot renders the initial state of your page but doesn't click, type, or hover on elements.
Content that is added to the DOM only after these interactions may not be visible to crawlers. One caveat is that accordions and tabs are fine as long as the content already exists in the DOM.
As you can see in the screenshot below, the paragraph text is visible in the Elements panel even when the accordion tab has not been opened or clicked.
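A sketch of that pattern (class names hypothetical): the answer text exists in the DOM from the first render and is merely hidden with CSS until the user opens the panel.

```html
<style>
  /* Hidden visually, but still present in the DOM for crawlers */
  .accordion-panel { display: none; }
  .accordion-panel.open { display: block; }
</style>

<button class="accordion-toggle">What is the DOM?</button>
<div class="accordion-panel">
  <p>The DOM is the browser's in-memory representation of the page.</p>
</div>
```

Contrast this with an accordion that fetches its content via JavaScript only after a click, which crawlers may never see.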

As we all know, links are fundamental to SEO. Search engines look for standard <a> tags with href attributes to discover new URLs. To make sure they discover your links, ensure the DOM shows real links. Otherwise, you risk crawl dead ends.
You should also avoid using JavaScript click handlers (e.g., <button onclick="...">) for navigation, as crawlers generally won't execute them.
Like this:
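For illustration, a sketch of both patterns (URL hypothetical):

```html
<!-- Crawlable: a standard anchor with an href search engines can follow -->
<a href="/pricing">Pricing</a>

<!-- Not crawlable: navigation hidden behind a click handler -->
<button onclick="window.location.href='/pricing'">Pricing</button>
```

The second element works fine for users, but it exposes no href for a crawler to queue.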

Use heading tags (<h1>, <h2>, etc.) in logical hierarchy and wrap content in semantic elements like <article>, <section>, and <nav> that correctly describe the site's content. Search engines use this structure to understand pages.
A common issue with page builders is making DOMs full of nested <div> elements without semantic meaning. This does little to help search engines understand your page and sets up problems for you or future devs trying to maintain the code on your site.
Maintain the same semantic standards you'd follow in static HTML.
Here's a snippet of semantic HTML as an example:
<!-- Semantic HTML -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>
Here's an example of "div soup" HTML that's non-semantic and harder for search engines and assistive technologies to understand.
<!-- Non-Semantic HTML -->
<div class="nav">
  <div class="nav-list">
    <div class="nav-item"><a href="/">Home</a></div>
    <div class="nav-item"><a href="/about">About</a></div>
  </div>
</div>
Keep the DOM lean, ideally under ~1,500 nodes, and avoid excessive nesting. Remove unnecessary wrapper elements to reduce style recalculation, layout, and paint costs.
Here's an example from web.dev of excessive nesting and an unnecessarily deep DOM:
<div>
  <div>
    <div>
      <div>
        <!-- Contents -->
      </div>
    </div>
  </div>
</div>
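To check where you stand, in the browser console `document.querySelectorAll('*').length` gives the total node count, and a small recursive helper can report nesting depth. A sketch of such a helper (the function name and stand-in tree are mine, not from any library) – the same logic works on any object exposing a `children` collection, so it can also be exercised outside the browser:

```javascript
// Returns the depth of the deepest branch of a node tree.
// In a browser you could call maxDepth(document.documentElement);
// here it runs on any object with a `children` array.
function maxDepth(node) {
  const children = Array.from(node.children ?? []);
  if (children.length === 0) return 1; // leaf node
  return 1 + Math.max(...children.map(maxDepth));
}

// Tiny stand-in tree, four levels deep (think html > body > div > p)
const fakeTree = {
  children: [{ children: [{ children: [{ children: [] }] }] }],
};
console.log(maxDepth(fakeTree)); // 4
```

If the reported depth runs well past a dozen levels, that usually points at wrapper divs worth removing.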
While DOM size is not a Core Web Vital itself, excessive and deeply nested DOMs can indirectly impact performance, especially on lower-end devices.
To mitigate these impacts:
A workable understanding of the DOM can help you not only diagnose SEO issues, but also effectively communicate with developers and others on your team.
We know that the DOM impacts Core Web Vitals, crawlability, and indexing. As AI agents increasingly interact with websites, DOM optimization becomes more critical. It's important to master these fundamentals now to stay ahead of evolving search and AI technologies.



Akievo brings AI-powered project planning to everyone. Tell Akievo what you want to achieve, whether it's a wedding, a business idea, a financial plan, or any other goal. It instantly creates a complete, structured plan with tasks, timelines, and milestones. Akievo gives everyday people the power of professional project management.
Kaomojiya is the largest free collection of Japanese kaomoji (text emoticons) on the web. With over 3,000 kaomoji organized across 500+ categories, from happy and sad to animals, food, and seasonal events, finding the perfect expression takes just one tap. Every kaomoji is pure Unicode text, meaning it works everywhere: chat apps, emails, social media, code comments, and more. No app to install and no account to create. Just browse, tap, and paste.
Remedy Entertainment has already kicked off the marketing campaign for Control RESONANT in earnest. Following the announcement at The Game Awards 2025, the Finnish studio shared the first gameplay trailer during February's State of Play showcase. This week, they've continued to reveal more footage and information about the game. Yesterday, we reported that Remedy has promised a base minimum of 60 frames per second on all platforms, and that the combat gameplay shown demonstrated fast-paced melee-focused action akin to Devil May Cry or Bayonetta. Now, YouTuber Hidden Machine reports that Remedy confirmed Jesse Faden will not be playable at all […]
Read full article at https://wccftech.com/jesse-faden-isnt-playable-control-resonant/

Apple's updated MacBook Pro lineup with M5 Pro and M5 Max options is available to pre-order, but not every customer wanting a portable Mac will keep track of the company's announcements, as some of them had already placed an order for some M4 Max versions just days before the unveiling. While we can picture their shock when Apple announced its new chipsets, what's an even bigger surprise is that the company outright canceled those M4 Max MacBook Pro orders, only to update them with M5 Max variants. Now that's top-tier customer service. One lucky customer who placed an order for the M4 MacBook […]
Read full article at https://wccftech.com/m4-max-macbook-pro-orders-getting-canceled-by-apple-replaced-by-m5-max-versions/

Looks like CXMT is also producing LPCAMM2 memory modules and not just the regular DDR4 and DDR5 DRAM chips. CXMT Is Reportedly Making LPCAMM2 Memory Modules, and Lenovo's ThinkBook 2026 Is Likely the First Device to Use It. Chinese memory maker CXMT is reportedly also making memory modules apart from producing memory chips. Unlike other smaller players, CXMT isn't just limiting itself to DDR4 and DDR5 memory chips but is reportedly assembling modules for mobile devices. The company is now reportedly making the new LPCAMM2 memory modules, which replace the soldered memory chips in laptops. We recently reported on the Lenovo […]
Read full article at https://wccftech.com/lenovo-thinkbook-2026-reportedly-utilizes-cxmts-lpcamm2-memory-modules/

Today we're opening the Google AI Center Berlin. This new space will serve as a hub for leading AI researchers and developers from Google DeepMind, Google Research and G…
There's a growing problem in SEO and content marketing that doesn't get talked about enough: everything is starting to sound the same. The same phrasing and structure, the same bland tone, the same safe language, the same robotic rhythm.
The web is filling up with perfectly optimized content that no one actually enjoys reading. And that's the real risk. Not that AI will replace SEOs, that Google will penalize AI content, or that automation will destroy search.
The real danger is that brands lose their voice, their personality, and their identity in the name of efficiency.
AI should make your SEO better, not blander. Faster, not flatter. Scalable, not soulless.
Here's how to use AI without turning your brand into beige wallpaper – and without losing what makes it worth ranking in the first place.
AI doesn't replace a marketing plan, positioning model, or clear brand direction. It supports them. In the same way that tools like Google Analytics, Semrush, and Screaming Frog help you understand what's happening, AI helps you work more efficiently and supports your thinking.
If your SEO strategy is simply, "We use AI," you don't have a strategy. You have a software subscription. Without a clear understanding of your audience, what they care about, the problems they're trying to solve, how they speak, what tone they respond to, and what your brand stands for, AI will just produce generic content at scale.
AI is genuinely good at certain parts of SEO, particularly areas that rely on scale, structure, and data processing. These include:
This is where AI earns its place. It handles repetitive manual work, speeds up research, reduces basic human error, and helps teams operate more consistently at scale. None of that is threatening. It's simply practical.
Used properly, AI removes friction from SEO work and gives teams more space to focus on strategy and decision-making. The problems begin when people expect AI to execute SEO work it isn't built for, treating it as a shortcut rather than a support system. When used this way, the output inevitably falls short of expectations.
Dig deeper: How to train in-house LLMs on your brand voice
AI struggles with the parts of marketing that build trust. Emotional intelligence, cultural awareness, tone, humor, empathy, and genuine understanding are difficult for it to replicate. It doesn't truly grasp brand positioning, long-term thinking, or commercial judgment, and it can't make ethical decisions in any meaningful way.
It can copy patterns, but it doesn't understand meaning. It can recreate tone, but it doesn't feel it. It can build structure, but it doesn't create identity.
That's why so much AI content feels fine but ultimately forgettable. It does the job, ticks the boxes, answers the question, follows SEO rules, and hits the word count. But it doesn't create a connection that turns traffic into trust, and trust into customers.
The biggest risk with AI in SEO isn't penalties or algorithm changes. It's gradual brand dilution. Over time, content becomes more neutral, more generic, and less distinctive.
Visibility may stay the same, but identity weakens. Traffic grows, but loyalty doesn't. Performance looks healthy, but trust doesn't compound.
Effectively using AI in SEO requires role clarity. Let AI handle the structure and scale, but keep meaning firmly in human hands.Β
AI is well-suited to researching, analyzing, clustering, outlining, drafting frameworks, data processing, repetitive optimizing, and detecting patterns. These are process-driven tasks where automation adds real value.
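Keyword clustering is a concrete example of the kind of scale task automation handles well. As a minimal sketch (not a production pipeline; the keyword list and cluster count are illustrative assumptions), keywords can be grouped by lexical similarity with TF-IDF vectors and k-means:

```python
# Illustrative keyword clustering: TF-IDF + KMeans.
# Embedding models would work similarly for semantic grouping.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

keywords = [
    "best running shoes", "running shoes for flat feet",
    "cheap flights to paris", "paris flight deals",
]

# Vectorize keywords, then partition them into two clusters.
vecs = TfidfVectorizer().fit_transform(keywords)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

# Group keywords by their assigned cluster label.
clusters = {}
for kw, label in zip(keywords, labels):
    clusters.setdefault(int(label), []).append(kw)
```

The point stands either way: the tool can group the keywords, but deciding which clusters matter to your audience remains a human call.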
However, everything that defines the brand and the relationship with the audience β voice, tone, storytelling, personality, trust building, emotional connection, commercial messaging, ethical judgment, and real audience understanding β should remain a human endeavor.
AI can help you build faster, but it shouldnβt decide what youβre building. It supports the process, but the design still belongs to you.
Dig deeper: How to blend AI and human input in your content approach
If you donβt define your brand voice, AI will default to something neutral and generic. That doesnβt happen because the technology is broken. It happens because you havenβt given it anything clear to work with.Β
Before using AI for content, clarify your brand voice, who your audience is and what they care about, and how your brand is positioned.
Many people assume better prompts can fix weak content. But prompts, no matter how detailed, donβt replace thinking, brand clarity, audience understanding, or positioning.
You can write the most detailed prompt in the world, but if your brand identity is fuzzy, the output will still be fuzzy. AI amplifies whatever you input, whether thatβs clarity or chaos. Thereβs no middle ground.
Dig deeper: Content marketing in an AI era: From SEO volume to brand fame
Hereβs what works in the real world and not just in tool demos.
Google doesnβt care whether content is AI-generated. It evaluates whether the content is useful, helpful, original, trustworthy, and valuable.
Low-quality human content gets punished. Low-quality AI content gets punished. High-quality content wins, regardless of who or what created it.
The myth that βAI content gets penalizedβ misses the point. What actually gets penalized is bad content, and AI simply makes it easier to produce bad content faster.
The brands that will lead SEO over the next few years wonβt be the ones with the biggest AI tech stacks. Theyβll be the ones that combine human strategy with AI efficiency, clear positioning with scalable systems, and strong brand voice with intelligent automation. Theyβll use AI to move faster, but not to think for them.
Brands with clarity and identity will strengthen their position. Brands without them will simply become louder without standing out.
Dig deeper: How to balance speed and credibility in AI-assisted content creation


Every once in a while, a product launch doubles as a marketing masterclass. Recently, Selena Gomezβs Rare Beauty released a new fragrance, and it wasnβt just the scent that captured attention. It was the bottle. Designed with accessibility in mind, the easy-to-use packaging quickly became the story, sparking conversations and praise from accessibility advocates and consumers alike.
The takeaway for marketers is hard to miss: an inclusive design decision became the campaign itself, delivering more cultural impact than any ad spend could buy. The lesson is equally clear: accessibility drives loyalty, enhances brand reputation, ensures compliance, and acts as a measurable growth driver.
Rare Beautyβs commitment to accessibility wasnβt a one-off. From packaging to pricing to its ongoing mental health advocacy, the brand has consistently embedded inclusivity into its DNA. That authenticity matters. Consumers can tell the difference between a stunt and a strategy, and they reward brands that lead with values.
And Rare Beauty isnβt alone. Across industries, leading brands are increasingly surfacing accessibility as a differentiator, not a footnote. Apple has consistently highlighted accessibility features as part of its core product storytelling, positioning them as innovation rather than accommodation. Microsoft has done the same by showcasing inclusive design in mainstream campaigns, including adaptive gaming products that reframed accessibility as a driver of creativity and connection. In fashion and retail, brands like Tommy Hilfiger and Unilever have brought adaptive design into the spotlight, integrating accessibility into product launches and brand identity rather than siloing it as a niche offering.
According to studies from Edelman and McKinsey, 73% of Gen Z choose to buy from brands they believe in, and 70% say they try to purchase products from companies they consider ethical. These aren't fringe preferences; they're mainstream expectations that can redefine how marketers approach building trust and growth with their audiences.
More than 1.3 billion people globally live with a disability, and together with their friends and family, they control over $18 trillion in spending power, according to the Return on Disability Group. For marketers, this isnβt just about compliance. Itβs about growth, reputation, and building genuine trust in one of the worldβs largest and most passionate consumer groups. That passion translates to powerful advocacy.Β
In discussions with AudioEyeβs A11iance Team, a group of individuals with disabilities who regularly share feedback on real-world accessibility experiences, one member stated, βIf I find a website that works and works very well for me, I will always recommend it to friends and family because I want people to have the same experience that I have.β
As another A11iance Team member, Maxwell Ivey, put it, βThe cheapest form of advertising is word of mouth, and people with disabilities can have some of the loudest voices when we find people willing to make the effort. Because itβs that sincere effort over time that really counts with us.β
When accessibility becomes part of the customer experience, it creates something money canβt buy: trust and loyalty that scale through advocacy. But the opposite is also true. In a survey of assistive technology users, 54% said they donβt feel eCommerce companies care about earning their business.
Most brands are still competing for the same oversaturated demographics while overlooking this opportunity hiding in plain sight. In doing so, theyβre leaving loyalty, advocacy, and revenue on the table.
Hereβs where many brands stumble: accessibility usually stops at the shelf. Marketers invest heavily in packaging, store displays and product design, while digital experiences, the first and often primary touchpoint for customers, lag behind.
As accessibility-led design continues to earn attention, loyalty and earned media, the gap between physical product innovation and digital experience has become harder to ignore.
AudioEyeβs 2025 Digital Accessibility Index found an average of 297 accessibility issues per web page detectable by automation alone. Each one represents friction in the customer journey, a conversion lost, or a compliance risk under frameworks like the Americans with Disabilities Act (ADA) and the European Accessibility Act (EAA).
Just as no campaign would launch without a brand review or legal check, no digital touchpoint should go live without an accessibility review.
Too often, accessibility is treated as a risk to manage instead of an advantage to leverage. The marketers who win will be the ones who flip that script. Here are four actions to start with.
Donβt hide it, lead with it. Brands like Rare Beauty have proved that inclusive design is the story. Build campaigns where accessibility isnβt a footnote but the differentiator that captures attention and loyalty.
Accessibility shouldnβt sit off to the side. Make Web Content Accessibility Guidelines (WCAG) alignment part of your brand guidelines, right alongside typography, logos and tone of voice. When accessibility is codified, it becomes second nature across every campaign.
Marketers are storytellers, and numbers seal the story. Track accessibility improvements such as fewer user-reported barriers, higher accessibility scores and fixes like improved alt text, color contrast or form usability. Connect those metrics to existing business outcomes like conversion, reach, and sentiment to show how accessibility drives ROI, not just compliance.
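For the color-contrast metric specifically, the underlying math is small enough to sketch. This is a minimal illustration of the WCAG 2.x contrast-ratio formula; real audits should rely on an established tool rather than hand-rolled checks:

```python
# WCAG 2.x relative luminance and contrast ratio (sketch).
def _linear(channel: int) -> float:
    # Convert an 8-bit sRGB channel to linear light.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    # Relative luminance weights the channels per WCAG.
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    # Ratio of the lighter luminance to the darker, offset by 0.05.
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum ratio, 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
# WCAG AA requires at least 4.5:1 for normal body text.
passes_aa = ratio >= 4.5
```

Tracking a number like this over time is exactly the kind of measurable improvement that can be connected to conversion and sentiment data.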
Just as youβd never risk brand safety in ad placements, donβt risk it in your digital touchpoints. Every update, seasonal campaign, or product drop should be monitored for accessibility. Trust and reputation are too valuable to leave exposed.
Rare Beautyβs fragrance launch proved something powerful: when you lead with accessibility, the story writes itself. The loyalty builds authentically, and the momentum flows naturally.
But hereβs the opportunity: most brands still donβt get it. Theyβre treating accessibility as a compliance checkbox instead of the growth strategy it really is.
For marketers, thatβs the wake-up call. Accessibility builds loyalty. It enhances brand reputation. It keeps your brand compliant. And it drives measurable growth across marketing efforts.
Rare Beauty showed how accessibility can capture attention at the shelf. The next opportunity is making sure it carries through online. Because when every touchpoint welcomes everyone, every campaign maximizes its impact.

We built Gloomin to make screen recording feel simple again: fast, lightweight, and free from the clutter of heavy video tools. Gloomin is a lightweight, Chrome-first tool that helps you record your screen, capture screenshots, annotate, blur, make quick edits, and share instantly. There are no complex timelines or setup; just hit record, explain visually, and move on. It's designed for everyday async communication, not video production.
Although the Steam Machine has yet to be released, the new Valve hardware has sparked quite a bit of discussion among gaming enthusiasts. While some believe it will end up being a niche system, others believe Valve's gaming system could send the console market into an upheaval, especially if it is priced right. With the current RAM prices, however, it remains to be seen how Valve will handle this critical element (and its release date), and with things unlikely to improve anytime soon, it's looking less likely that the system could be released within the price range of a traditional [β¦]
Read full article at https://wccftech.com/steam-machine-biggest-competitor-playstation/

Like many of the wrestlers the franchise lets you play as, the WWE 2K games have been on top of the world, down and out, and everything in between. Last year's WWE 2K25 was not a career high point as the game focused heavily on The Island, a new online hub in the vein of NBA 2K's The City, to the exclusion of almost everything else. Thankfully, as I described in my hands-on impressions, WWE 2K26 seems to be spreading the love a bit more with new features. Does WWE 2K26 light a fire under the franchise? Or is it [β¦]
Read full article at https://wccftech.com/review/wwe-2k26/



Capcom releases its first PC patch for Resident Evil Requiem Capcom has officially released its first PC patch/update for Resident Evil Requiem. This update includes stability fixes that make the game less likely to freeze or crash on certain hardware configurations. Additionally, this update adds performance optimisation for RTX 40 and RTX 50 series […]
The post Resident Evil Requiem PC patch optimises RTX 40/50 performance appeared first on OC3D.
Ubisoft confirms that more than three Assassin's Creed games are in development It's official: Ubisoft has confirmed that Assassin's Creed Black Flag Resynced is real. Ubisoft's worst-kept secret is finally official, and the Assassin's Creed series is returning to the seven seas. For now, Ubisoft has only released the following statement and the artwork below. […]
The post Ubisoft officially confirms Assassinβs Creed Black Flag Resynced appeared first on OC3D.
Today, Bloomberg reports an indirect threat to the Nintendo Switch 2 console coming from the ongoing memory storage crisis. As you're probably already aware if you're a regular Wccftech reader, the global rush to build artificial intelligence hardware has brought major increases in the price of NAND flash memory: NAND contract prices are forecast to surge by up to 90% in this quarter compared with the previous three-month period, when prices had already spiked by over 30%. This is an issue for the Nintendo Switch 2 because of its limited storage. The hybrid console only has 256GB of […]
Read full article at https://wccftech.com/switch-2-nintendo-games-vehicle-third-parties/

Apple's low-cost iPhone launch strategy lived on with the arrival of the iPhone 16e, with the company making the required efforts to announce the iPhone 17e just a year later. Naturally, to save on production costs as much as possible and in light of the DRAM shortage, there isn't expected to be much of a cosmetic difference, but you might be interested in what's underneath the hood. Here's everything you need to know about the iPhone 17e's launch, features, and specifications. Design and display Unfortunately, those looking forward to the Dynamic Island change on the iPhone 17e will be disappointed, […]
Read full article at https://wccftech.com/roundup/apple-iphone-17e-launch-price-features-specifications-roundup/

Monitor, sync & back up your git repos from the menu bar
Nvidia addresses overclocking concerns and other issues with their latest GeForce Hotfix Nvidia has officially released its GeForce Hotfix 595.76 driver, a new driver version that addresses GPU overclocking issues and other problems raised by Nvidiaβs GeForce 595.71 driver. With their GeForce 595.71 driver, overclockers found that their GPUs suddenly became unstable. Voltages were capped, [β¦]
The post Nvidia fixes overclocking and Resident Evil Requiem issues with GeForce Hotfix 595.76 appeared first on OC3D.
Conduit AI answers missed calls with a natural voice, captures caller details and job specifics, and emails you a qualified lead within seconds. It connects to your existing number via call forwarding and provides 24/7 coverage for after-hours, weekends, and overflow. You can use the client dashboard to track calls, conversion rates, and revenue recovered. Train the agent for your industry, prioritize emergencies, and scale during surges so you can call back quickly and close more deals.
AMD's Ryzen 9 9900X 12-core CPU drops to $373.80 at Amazon, saving you $125 (25%) off the list price.
Read full article at https://wccftech.com/amd-ryzen-9-9900x-drops-to-374-at-amazon-25-off/

Open-world automations, managed in plain English
Enterprise MCP Control Plane
Self-host your community platform
Cursor for document work
Studio-quality voice AI that runs locally on your desktop.
Leave ChatGPT while keeping everything it learned about you
Manage your App Store Connect from macOS desktop app
Private voice-to-text for macOS. Hold a key, speak, done.
Codex now runs natively on Windows with secure sandbox
Kill the keyboard for your team with voice AI
The magic of Mac at a surprising price
Hook. Body. CTA. Know exactly where your ad fails.
Tappable visual stories instead of ChatGPT text walls
Free macOS notch app with 12+ modules & custom SDK
Frontier open-source MoE model built for OpenClaw agents
An amazing Mac at a surprising price
Turn your course into a full suite of embeddable AI agents


DTC Skills offers production-ready AI skills and agents built for eCommerce operators. Each skill connects to your Brand Brain, the Commerce Intelligence System, ensuring that product pages, emails, ads, and support content match your voice and drive revenue. It runs on Claude Code, Cursor, and Windsurf, and integrates with Shopify, Klaviyo, Meta Ads, and Google Ads.
You can browse, install, and ship real work quickly, or choose managed agents for hands-off execution. Creators can list skills and sell them to DTC operators.
Batch52 helps small-batch makers sell online without relying on marketplaces. You can launch a mobile-ready shop on your own subdomain, list products with photos, and accept payments through credit cards, Venmo, Zelle, or PayPal. Track orders and payments at a glance with automatic confirmations. Use Batch Days to run pre-order drops with capacity limits and pickup windows that automatically close on sellout. You keep your own Stripe account, can export customers anytime, and pay no commissions.
FerretForge is a universal development platform for building, scanning, and shipping AI agent skills that work across Claude Code, Cursor, Windsurf, and more platforms. It scores security and intelligence, flags prompt injection and credential leaks, and converts skills between formats with fidelity checks. Use the editor, AI Coach, and Academy to author, improve, and share reusable skills. Connect via MCP server, CLI, REST API, or VS Code to automate workflows, track history, and collaborate with projects and version control.
Google removed its JavaScript accessibility guidance from help documents, saying the advice is outdated and noting it has rendered JavaScript for years.
The post Google Removes JavaScript SEO Warning, Says Itβs Outdated appeared first on Search Engine Journal.
VeritasLinks is a unified platform for AI perception intelligence and Generative Engine Optimization. It maps your digital footprint across forums, social platforms, reviews, and web sources to reveal how AI systems and real buyers categorize and recommend your brand. You will be benchmarked against competitors.
Use built-in tools to close visibility gaps quickly. These include AI focus groups, blog and FAQ generators, landing pages, video scripts, and promotional videos. You can track a Visibility Score, receive actionable steps, and deploy assets that improve trust, search visibility, and conversions.
The potential rollout would be available to companies with at least 50 employees, with the move suggesting that a paid business plan is on the horizon.
Β
New data released with GWI found that digitally native users are consuming business content outside of traditionally professional online environments.
Issues at a local data center in Ashburn, Virginia, led to more timeouts, errors and latency issues β and user speculation about government interference.
The platformβs new capabilities include more detailed earnings and payout data to help creators better manage their AdSense accounts.

The app told the BBC that using that level of security would impede police investigations and said its stance sets it apart from rivals.
New data reveals the importance of in-app promotional tools such as influencer recommendations and artificial intelligence suggestions.
WordPress releases three plugins for integrating Claude, Gemini, or OpenAI into websites
The post WordPress Releases AI Plugins For Anthropic Claude, Google Gemini, And OpenAI appeared first on Search Engine Journal.
Nvidiaβs upgrading its RTX 5050 in the strangest way possible Rumour has it that Nvidia plans to upgrade its RTX 5050 graphics card with a new memory configuration, giving gamers 9GB of GDDR7 memory across a downgraded memory bus. By shifting from 2GB GDDR6 memory chips to fewer, but faster, 3GB GDDR7 memory chips, Nvidia [β¦]
The post New Nvidia RTX 5050 9GB GPU planned with crazy memory config appeared first on OC3D.
Reviews of the Pixel 10a describe it as a minor update over last year's model, with mostly cosmetic changes and similar hardware. Despite the limited changes, critics say it remains one of the best budget Android phones thanks to its reliable performance, strong cameras, and competitive $500 price point.
LovenWork connects ambitious professionals in two modes: Work and Love. Use Work Mode to find co-founders, freelancers, and mentors, matched by skills, goals, and work style. Switch to Love Mode to meet partners aligned on values, lifestyle, and chemistry. The algorithm pairs complementary minds, and professional verification via work email, LinkedIn, or portfolio keeps the community authentic. Toggle anytime as your intentions change and pursue both career growth and meaningful relationships.
Rambus has announced the development of its fastest HBM controller yet, based on the HBM4E standard, offering up to 16 Gbps transfer speeds per pin. Ready For Next-Gen AI Data Center Superchips, Rambus Intros HBM4E Memory Controller As expected, Rambus has developed the world's fastest HBM4E memory controller, offering a 60% boost over its HBM4 controller with up to 16 Gbps pin speeds (vs 10 Gbps on HBM4) and up to 4.1 TB/s of total bandwidth per module (vs 2.56 TB/s on HBM4). The HBM4E standard will be utilized by NVIDIA's Rubin Ultra GPUs and AMD's MI500 series accelerators. Press […]
Read full article at https://wccftech.com/rambus-hbm4e-memory-controller-60-percent-faster-vs-hbm4-at-4-1-tbps/

PulseKit turns your key metrics into widgets across your Apple devices. Instead of opening dashboards to check if something changed, the numbers you care about stay visible on your Home Screen. It works with tools like Product Hunt, LinkedIn, Discord, DeFiLlama, and more integrations are on the way. Itβs not a dashboard replacement; itβs the layer before the dashboard.
Apple's A18 Pro from 2024 utilizes TSMC's InFO-PoP (Integrated Fan-Out Package on Package) technology, meaning that the DRAM sits on top of the die as part of the package. The technology giant re-purposed the same SoC and incorporated it into the MacBook Neo, which is why the latter is limited to 8GB of RAM. For those genuinely interested in upgrading to the $599 portable Mac but discouraged by the inadequate memory, the way the A18 Pro has been designed prevents this upgrade. While it was still possible for Apple to introduce more memory, it would be at a […]
Read full article at https://wccftech.com/macbook-neo-8gb-ram-limitation-is-due-to-the-a18-pro-packaging/

Build a Rocket Boy, the studio co-founded by Mark Gerhard and Leslie Benzies, the latter of whom was a long-time producer at Rockstar who worked on several Grand Theft Auto titles, has announced another round of mass layoffs after its debut project, MindsEye, was one of 2025's catastrophic disasters. In a statement posted to the studio's LinkedIn page, Gerhard calls the layoffs "a deeply painful decision," adding that "letting colleagues go is never something any leader wants to do, and I know the impact this will have on individuals, families, and our wider community." That said, in the face of […]
Read full article at https://wccftech.com/mindseye-build-a-rocket-boy-announces-more-layoffs-ceo-blames-organized-espionage-corporate-sabotage/

EA and Respawn are trying to bring the hammer down hard on cheaters in Apex Legends by making some changes to how they'll respond to players caught using devices like Strikepacks, Cronus, and XIM. If you are caught using any of those devices or devices like them, you will be permanently banned from Apex Legends. Moreover, Respawn has said that it "will not entertain appeals for leniency on accounts confirmed cheating." So if Respawn confirms that you've been cheating, you will have no recourse for getting back into the game, and creating a new account will simply restart […]
Read full article at https://wccftech.com/ea-respawn-permanently-ban-apex-legends-players-using-strikepacks-cronus-xim/

Google signs the White House Ratepayer Protection Pledge and shares details about protecting ratepayers, the electric grid, and creating jobs.

Google updated AI Mode to change how it displays recipes and links to the creators.
The post Google Updates AI Mode Recipe Sites Results In Response To Backlash appeared first on Search Engine Journal.
Epic Games boss Tim Sweeney has come out on top once again in his dispute over how platform holders dictate the relationship that developers have with their users. After starting disputes with both Apple and Google years ago, which initially took popular Epic Games titles like Fortnite off Android and iOS mobile platforms, Epic and Sweeney's dispute with Google has finally come to a close. To be clear, Fortnite already made its return to iOS and Android when Epic was finally able to launch a mobile version of the Epic Games Store worldwide back in 2024 (though Australians had […]
Read full article at https://wccftech.com/fortnite-will-return-google-play-store-on-android-epic-games-ends-dispute-with-google/

Intel's CFO, David Zinsner, took the stage at the Morgan Stanley conference, and based on his comments on the foundry front, Team Blue looks a lot more confident about division breakeven. Intel's 18A-P & 14A Will Prove to Be Effective Solutions For External Customers; Packaging To Bring 'Billions' In Revenue Under CEO Lip-Bu Tan, Intel has been entering the foundry market at a time when the AI frenzy significantly drives customer demand. One of the more significant achievements under Tan was the successful ramp-up of Panther Lake, and according to Zinsner, 18A has delivered on expectations, with yield rates improving [β¦]
Read full article at https://wccftech.com/intel-foundry-breakeven-target-for-2027-now-looks-a-lot-more-real/

The latest NVIDIA GPU driver has reportedly mitigated the performance issues users were facing on the latest Resident Evil title. NVIDIA's Sean Pelletier Confirms Game Ready Driver 595.71 Fixed Performance Issues in Resident Evil Requiem, but Users Will Need to Reboot NVIDIA Game Ready Driver 591.86, which was released at the end of January, reportedly caused trouble for gamers with the latest Resident Evil instalment. Many users on social media reported that they were seeing a significant performance impact in Resident Evil Requiem with the Driver 591.86, mostly involving GeForce RTX 40 series cards. Some users were reporting significant FPS [β¦]
Read full article at https://wccftech.com/resident-evil-requiem-performance-fixed-through-game-ready-driver-595-71/


Google is rolling out an update to AI Mode for recipe results that it hopes will make recipe bloggers happy. Googleβs Robby Stein said on X, βWeβve heard feedback on recipe results in AI Mode, and weβre making updates to better connect people with recipe creators on the web.β
The changes aim to make it easier to click over to recipe sites, though I am not 100% certain yet whether the recipe summaries turnΒ recipes into AI slop.
βStarting today, when you search for meal ideas like βeasy dinners for two,β you can tap on the dish to see links to relevant recipe sites, plus a short overview of the dish to help with inspiration,β Stein added.
What it looks like. Google shared a video of the feature in action.
More recipe details too. Google is also adding more information to the recipe results, including cook time, which Google said its testers "have found useful for deciding on a recipe."
βWe know thereβs more work to be done on this, so stay tuned for future updates,β Robby Stein added.
Why we care. Recipe bloggers, well, content creators in general, have not been happy with how little traffic Google's AI experiences send compared with traditional search results. Here we see Google trying to make changes to encourage more searchers to click from those AI experiences through to the bloggers' websites.
Will it make a big difference? Time will tell.

Apple unveils its new MacBook Neo, and prices start at £599 Apple has officially unveiled its new MacBook Neo, an all-new laptop design that aims to deliver an affordable entry point into the Mac ecosystem, with prices starting at £599/$599. This new MacBook features an aluminium enclosure, a 13-inch Liquid Retina display (2408×1504), and up […]
The post Apple targets affordability with its new MacBook Neo appeared first on OC3D.
KnowIQ captures departing employees' critical knowledge through one structured interview and produces three outputs: an action list for managers, a reference document for the team, and a scorecard for leadership. Managers create the interview, the leaver answers seven targeted questions, and AI generates specific, actionable documents in under a minute.
This tool helps surface undocumented systems, plan handovers, and protect continuity so your team isn't starting from zero after someone leaves.
Ubisoft has provided a major update on what's coming next in the Assassin's Creed franchise as we near the one-year anniversary of its last major release, Assassin's Creed Shadows. Firstly, we can finally stop calling the Assassin's Creed Black Flag Remake a 'rumoured' game, because Ubisoft has finally officially confirmed the remake to be on its way and that it is indeed titled 'Assassin's Creed Black Flag Resynced,' which was the title that leaked in December 2025. In a blog post on the official Ubisoft website titled 'Assassin's Creed: Into 2026,' recently named head of AC content, Jean Guesdon, appears [β¦]
Read full article at https://wccftech.com/assassins-creed-black-flag-remake-resynced-confirmed-ubisoft-assassins-creed-franchise-future/

Cyprus-based game developer Mundfish has just unveiled the fourth and final Atomic Heart DLC. Titled Blood on Crystal, it will be released on April 16 for all platforms (PC, PlayStation 4 and 5, Xbox One and Xbox Series S and X). According to the studio, Atomic Heart fans will experience the "thrilling, high-stakes" conclusion of the first game's narrative in this DLC, in which protagonist P-3 and their comrades must venture deeper than ever before into Facility 3826 to battle CHAR-les and make a final attempt to save humanity from total annihilation. Here's an overview provided by Mundfish: Mundfish also [β¦]
Read full article at https://wccftech.com/atomic-heart-final-dlc-blood-crystal-out-next-month/

Intel's board chair, Frank Yeary, is retiring from his current position, according to the latest announcement, and the industry has responded with mixed reactions. Intel Shifts From a Finance-Centric to an Engineering-First Board Chair, Aligning With Lip-Bu's Ideologies Intel underwent a massive administrative shift back in 2025, mainly driven by the departure of former CEO Pat Gelsinger, under stringent conditions. Team Blue has always focused on advanced chip manufacturing, and, through the '5N4Y' ideology developed under Gelsinger, Intel made significant investments in its foundry division. However, Intel's efforts didn't deliver much shareholder value, mainly because there weren't significant breakthroughs from [β¦]
Read full article at https://wccftech.com/frank-yeary-the-intel-board-chair-behind-the-foundry-spin-off-push-is-retiring-this-year/

Apple has finally unveiled its much-anticipated MacBook Neo, bringing a 13-inch Liquid Retina display with a 2,408 x 1,506 resolution and 500 nits brightness, uniform bezels, Touch ID, dual-firing speakers that support Spatial Audio, a 1080p front camera, a brightly colored aluminum frame, and color-matching keyboard to the proverbial table. However, to price the MacBook Neo at a very attractive $599, Apple has had to make a lot of compromises along the way, with some that were, frankly speaking, quite unavoidable. The litany of compromises that Apple has had to make to launch the MacBook Neo at a price point [β¦]
Read full article at https://wccftech.com/here-are-all-of-the-compromises-that-apple-has-had-to-make-to-price-the-macbook-neo-at-599/

NVIDIA is preparing a new variant of its GeForce RTX 5050 graphics card, which will feature 9 GB of memory capacity. NVIDIA Working On A Second GeForce RTX 5050 Graphics Card Featuring 1 GB Of Extra Memory The report comes from MEGAsizeGPU, who has a solid track record when it comes to NVIDIA-related GPU SKU leaks. As per the new information, NVIDIA is preparing a second GeForce RTX 5050 GPU, which would offer extra memory capacity. The new product will be named the NVIDIA GeForce RTX 5050, but unlike the existing model, which packs 8 GB of GDDR7 memory, the […]
Read full article at https://wccftech.com/nvidia-preps-geforce-rtx-5050-9-gb-gddr7-memory/

In a little more than two months, we'll be able to jump into IO Interactive's latest major game, 007 First Light. The storied Hitman developer taking on a James Bond story felt like a match made in heaven when it was first revealed all the way back in 2020. As we wait to find out if we were right about IO and James Bond being as good as peanut butter and chocolate, IO Interactive has just released a new deep-dive into the game, with the studio focusing in on its version of Bond and the cast of classic 007 characters. [β¦]
Read full article at https://wccftech.com/io-interactive-digs-into-james-bond-origin-story-approach-007-first-light/

Canvas in AI Mode is now available for everyone in the U.S. Plus, it can now help you draft documents or build interactive tools.
NotebookLM is introducing Cinematic Video Overviews, a major update to its AI-powered video creation capabilities. This new feature moves beyond narrated slides in Videoβ¦
A look at the 2026 National Teacher of the Year orientation held at Googleβs Mountain View campus.
Google is investigating a disruption affecting Google Ad Manager, according to an update posted on the Google Ads Status Dashboard.
The incident began at 13:49 UTC on March 4. By 13:54 UTC, Google said it was reviewing reports that some users could access Ad Manager but werenβt seeing the most up-to-date data.
Whatβs happening. The issue appears to impact reporting consistency. Specifically, Ad Exchange match rate and Ad Exchange request values are not aligning between Ad Managerβs interactive reports and the legacy reporting query tool (now deprecated).
Why we care. Reporting discrepancies in Google Ad Manager can directly impact how you evaluate performance and optimize campaigns. If Ad Exchange match rates and request data donβt align across reporting tools, it becomes harder to trust the numbers driving pacing, forecasting and revenue decisions.
What it means. Users can still log into Ad Manager, but reporting discrepancies may affect data accuracy β at least temporarily. Thereβs no indication yet of a full outage, but for publishers and advertisers relying on real-time reporting, mismatched metrics could complicate performance monitoring and optimization decisions.
Whatβs next. Google says itβs actively investigating and will provide further updates. In the meantime, affected users are advised to monitor the status dashboard and contact support if theyβre experiencing issues not listed there.

Google introduced a new availability value in Google Merchant Center β built specifically for vehicle sellers who donβt carry every model on the lot. The new attribute, βbuild to order,β lets dealers flag vehicles that arenβt physically in inventory but can be customized and ordered by customers.
What needs to change. Sellers must update two areas: their structured data (set availability to BuildToOrder) and their Merchant Center feed (set availability to build to order). Consistency between structured data and feed submissions is critical to avoid disapprovals.

[availability] attribute in GMC

Why we care. Until now, sellers had limited ways to signal that a vehicle wasnβt available for immediate pickup. The new value better reflects how many modern automakers operate β especially direct-to-consumer brands like Tesla and Rivian, where buyers configure features before production. For dealers offering factory orders or custom builds, this means clearer expectations for shoppers β and cleaner data for Google.
The fine print. Vehicles marked βbuild to orderβ must have the condition attribute set to βnew.β If a listing is marked βused,β it will be disapproved β Google considers build-to-order vehicles to be newly configured, not pre-owned.
Bottom line. If you sell customizable or factory-order vehicles, this update gives you a more accurate way to reflect availability β but only if your feed, structured data and condition fields are properly aligned.
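To make the alignment concrete, here is a minimal sketch of the structured data side, built as JSON-LD in Python. The vehicle name and price are illustrative; the `availability` and `itemCondition` values follow the requirements described above (and the matching Merchant Center feed column must carry the `build to order` value).

```python
import json

# Illustrative JSON-LD for a factory-order vehicle listing.
# Per the update: structured data availability must be BuildToOrder,
# the feed must say "build to order", and condition must be "new".
offer = {
    "@context": "https://schema.org/",
    "@type": "Product",
    "name": "Example Trim Level",  # hypothetical vehicle
    "offers": {
        "@type": "Offer",
        "availability": "https://schema.org/BuildToOrder",
        "itemCondition": "https://schema.org/NewCondition",  # must be new, not used
        "priceCurrency": "USD",
        "price": "45000",
    },
}

print(json.dumps(offer, indent=2))
```

If the feed says `build to order` but the structured data still reports in-stock availability (or the condition is set to used), the mismatch risks disapproval.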
First spotted. This update was surfaced by Google Shopping specialist Emmanuel Flossie, who explained how to implement it on his blog.
Dig deeper. βAvailability [availability]β Google Merchant Center help doc

PPC platforms are asset-hungry. What began as simple text ads and keyword bidding has evolved into an AI-driven ecosystem.
Tools inside Google Ads can now remove backgrounds, generate lifestyle scenes, and even create synthetic humans in minutes. But just because the technology allows it doesnβt mean every brand should use it.
That shift forces PPC advertisers to confront difficult questions:
A brand integrity hierarchy offers a way to navigate those decisions β a four-level framework that helps determine how much AI manipulation your brand, industry, and audience can tolerate.
Generic AI ethics guidelines donβt account for the operational realities of paid search. PPC isnβt a brand storytelling channel. Itβs a high-volume, high-velocity system that demands constant image production across dozens of audiences, formats, and placements.
You must generate fresh lifestyle imagery at a pace traditional creative workflows canβt sustain.
At the same time, Google and Bing enforce strict policies around accurate product representation, especially in Merchant Center, where even minor visual inaccuracies can trigger disapprovals or account risk.
Layer on top of that the platform pressure. Google Ads added Nano Banana Pro, turning Asset Studio into an AI co-creation environment. Performance Max actively pushes you toward AI-generated backgrounds, variations, and lifestyle images to improve performance. Demand Gen and Merchant Center also now have capabilities to change product images at scale.
Most brands canβt afford the photoshoots required to keep up with this demand, yet the volume and placement of images across channels make them unavoidable if you want to compete.
This combination of policy risk, creative pressure, and platform-promoted tools is unique to PPC β which is exactly why the industry needs its own AI ethics framework.
Dig deeper: Whatβs next for PPC: AI, visual creative and new ad surfaces

Definition: The product and the human exactly as they exist in reality.
Permitted activities:
PPC context: This level is fully compliant with Google and Microsoftβs βaccurate representationβ policies. Merchant Center explicitly permits technical edits that donβt alter the product itself. This is the safest zone for regulated industries such as finance, healthcare, legal services, and brands with strict authenticity standards.
Client talk-track: βWeβre using AI to make your reality look its best on every screen size. We arenβt changing what the product is, only how itβs displayed.β
Risk assessment: Zero brand risk. Zero policy risk. Maximum consumer trust.
I think about Level 1 the same way I think about working with a graphic designer in Photoshop. Youβre not changing the product, the setting, or the truth β youβre simply cleaning up what already exists.
This level is about technical refinement, not creative invention. Itβs the equivalent of adjusting lighting, removing dust, fixing a crooked crop, or correcting color balance. Nothing about the image becomes βuntrue.β Youβre enhancing reality, not altering it.
Definition: AI-generated environment, not AI-generated product.
Permitted activities:
Google Ads context: Performance Maxβs AI background generation is designed for this level. Google allows contextual enhancement as long as the product remains unchanged. This approach is useful for scaling creative variations without expensive location shoots or studio rentals.
Risks:
Client talk-track: βWeβre using AI to build a world for your product to live in. The product the customer receives is identical to the one in the ad.β
Risk assessment: Low brand risk. Low policy risk. Maintains consumer trust if executed thoughtfully.
Level 2 sits in an odd psychological space. The manipulations themselves are still low-risk. Youβre creating scenes, composites, or enhanced environments the same way a graphic designer would in Photoshop.
Brands have been doing this manually for decades. But the moment AI performs the same task, something shifts. To customers, and even to some advertisers, the exact same edit can feel more artificial simply because an algorithm did it instead of a human.
That perception gap matters.
Even when the output is identical, AI-assisted scene creation can trigger a sense of βthis looks fakeβ that traditional Photoshop work never did. Itβs irrational, but itβs real and worth acknowledging at this second tier. The actual risk is still low, but the emotional risk is higher than Level 1.
Dig deeper: AI tools for PPC, AI search, and social campaigns: Whatβs worth using now
Definition: Altering the βheroβ β the product or the person.
Activities:
PPC industry context: The platforms prohibit misleading or manipulated product imagery. Merchant Center disapprovals often occur at this level. High sensitivity exists in beauty, apparel, food, and health categories, where consumer expectations are tied directly to visual accuracy.
Recent consumer trust studies show that users feel deceived when they discover product images have been significantly altered. This is not only a policy concern; even more, it is a brand reputation issue.
Half of U.S. adults (51%) believe AI-generated and edited content needs better labeling, CNET reports. One in five (21%) believe AI content should be prohibited on social media with no exceptions.
Risks:
Client talk-track: βThis is where we risk the βpress call-out.β If we remove a modelβs birthmark or make a burger look like a 3D render, we arenβt optimizing β weβre fabricating.β
Risk assessment: High brand risk. High policy risk. Potential for long-term damage to consumer trust.
Level 3 moves into territory where the image no longer reflects the real person or product. And yes, brands have been doing this in Photoshop for years, and theyβve been called out for it just as long. Thereβs precedent, and thereβs backlash.
What changes at Level 3 is scale. AI lets you make edits instantly, repeatedly, and across entire product catalogs or campaigns. The ethical risk isnβt new, but the volume and speed at which AI enables these distortions make the consequences far bigger. A single questionable Photoshop edit is one thing. Hundreds of AI-altered images pushed across every channel is something else entirely.
This is where the risk stops being theoretical and starts becoming reputational β and where paid search teams need a clearly defined stance.
Definition: Synthetic humans, synthetic products, or fully AI-generated scenes.
Activities:
PPC context: Synthetic humans are allowed in some formats with proper disclosure, but Merchant Center prohibits listing products that donβt exist. There is a high risk of disapproval for βinaccurate representation.β This level may be acceptable for creative testing or conceptual campaigns, but itβs dangerous as a primary brand identity.
Legal precedents regarding copyright protection for non-human-authored creative works remain murky. Using fully synthetic assets may cause challenges if ownership disputes arise or if synthetic models are mistaken for real individuals without proper disclosure.
Risks:
Client talk-track: βThis is for high-speed testing or fringe creative. If we use this for our main brand identity, we must be prepared for the βinauthenticβ label.β
Risk assessment: Critical brand risk. Critical policy risk. Use with extreme caution and full disclosure.
Level 4 is where AI stops enhancing reality and starts inventing it. The image becomes a construction. While I havenβt personally worked with brands operating at this tier, itβs absolutely where the industry could be headed, and it deserves serious consideration.
Fully fabricated imagery can mislead customers, violate platform policies, and erode trust at scale. When AI creates people, products, or environments from scratch, the line between creative expression and consumer deception becomes razor-thin. The reputational fallout from getting this wrong is far greater than anything in Levels 1 through 3.
This is the highest-risk tier because it asks a fundamental question: Are you still advertising your product or an AI-generated fiction of it?
Not every brand should operate at the same level of the brand integrity scale. Your acceptable AI usage depends on four factors.
Every brand must choose its acceptable level(s) on the scale and document it in a brand AI manifesto for PPC.
Examples:
Action: Create a PPC brand AI manifesto in collaboration with creative, legal, and executive leadership.
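One way to make a manifesto enforceable is to encode its ceiling as a simple pre-flight gate. The sketch below is illustrative, not a prescribed implementation: the level names paraphrase the four tiers described above, and the threshold logic is an assumption about how a team might operationalize its documented limit.

```python
from enum import IntEnum

class BrandIntegrityLevel(IntEnum):
    # The four tiers of the brand integrity scale described above.
    TECHNICAL_REFINEMENT = 1   # cleaning up reality: lighting, crop, color
    SCENE_CREATION = 2         # AI-generated environment, unchanged product
    HERO_ALTERATION = 3        # altering the product or the person
    FULL_SYNTHESIS = 4         # synthetic humans, products, or scenes

def passes_manifesto(asset_level: BrandIntegrityLevel,
                     manifesto_max: BrandIntegrityLevel) -> bool:
    """An asset clears pre-flight only if its manipulation level does not
    exceed the ceiling documented in the brand AI manifesto."""
    return asset_level <= manifesto_max

# e.g., a regulated finance brand might cap its manifesto at Level 1,
# so a Level 2 AI-generated background would be flagged for review:
print(passes_manifesto(BrandIntegrityLevel.SCENE_CREATION,
                       BrandIntegrityLevel.TECHNICAL_REFINEMENT))
```

The point is not the code itself but the discipline: a documented ceiling that every AI-assisted asset is checked against before launch, rather than ad hoc judgment calls per campaign.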
Two critical questions should guide every AI decision:
The press test is the real guardrail. Googleβs policies change. Public perception is permanent.
Every AI-assisted asset must be checked for:
Automated AI generation should never bypass human review, especially in regulated verticals.
Different audiences have different tolerances for AI manipulation:
Dig deeper: Why creative, not bidding, is limiting PPC performance
Implement a pre-flight checklist for AI-generated assets:
Safe placements for AI-generated assets
Unsafe placements
Legal teams should:
Industry standards and emerging frameworks, such as the Coalition for Content Provenance and Authenticity (C2PA), are establishing transparency protocols for AI-generated media. Monitor these developments and align your practices accordingly.
Some PPC professionals are already experimenting with the tools discussed in this framework.
Ameet Khabra, owner of Hop Skip Media, tested Nano Banana when it first appeared inside the Google Ads interface. She found the tool useful for ideation and quick edits, but noted that strong results often required highly specific prompts.
That level of prompt detail may be realistic for experienced advertisers, but itβs less likely for many SMBs experimenting with AI-generated assets.
Even when AI imagery is available, some advertisers remain skeptical of how it appears to audiences.
Julie Friedman Bacchini, owner of Neptune Moon, says AI-generated images often look noticeably artificial.
To understand how people outside the industry view these changes, I also polled the community on Threads.

The sentiment was strikingly consistent: while the industry focuses on efficiency, the public is increasingly wary of fantasy versus reality.
One commenter wrote:
Another described the issue more bluntly:
AI isnβt inherently deceptive. Nor is it inherently transparent. Itβs a tool. Like all tools, its ethical impact depends on how itβs used. As PPC experts with access to these technologies and advisory roles with brands, we need a clear point of view to guide these decisions.
The brand integrity scale outlined above provides a structured approach to AI use in PPC, helping you navigate the tension between automation and authenticity. By defining your brandβs position on this spectrum today, you ensure tomorrowβs campaigns are remembered for their resonance.
Adopt ethical AI standards β define your brand AI manifesto, implement the press test, and ensure every AI-generated asset passes human review before it reaches your audience. Your brandβs integrity depends on it.

Google has removed the βdesign for accessibilityβ section from within the Understand the JavaScript SEO basics documentation. Google said this was removed because the information was βout of date and not as helpful as it used to be.β
The old text said that using JavaScript for page content βmay be hard for Google to see.β But Google now says that has not been true for many years, which is why Google removed the section.
The old section. The old section read:
βDesign for accessibility: Create pages for users, not just search engines. When youβre designing your site, think about the needs of your users, including those who may not be using a JavaScript-capable browser (for example, people who use screen readers or less advanced mobile devices). One of the easiest ways to test your siteβs accessibility is to preview it in your browser with JavaScript turned off, or to view it in a text-only browser such as Lynx. Viewing a site as text-only can also help you identify other content which may be hard for Google to see, such as text embedded in images.β
Why it was removed. Google explained:
Why we care. Although Google Search handles JavaScript very well, it is still important to double-check what Google actually sees by using the URL Inspection tool in Google Search Console.
Keep in mind that while Google, and likely Microsoft Bing, can render JavaScript reliably, many of the newer AI engines may not.
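For clients that may not render JavaScript, one rough sanity check is whether your critical content appears in the raw, server-delivered HTML rather than being injected by a script at runtime. This sketch is purely illustrative (the sample markup and phrases are hypothetical, and real crawler behavior varies):

```python
import re

# Raw, unrendered HTML as a non-JS client would receive it.
raw_html = """
<html><body>
  <h1>Acme Widgets</h1>
  <div id="app"></div>  <!-- filled in by JavaScript at runtime -->
  <script>document.getElementById('app').textContent = 'Pricing: $19/mo';</script>
</body></html>
"""

def visible_without_js(html: str, phrase: str) -> bool:
    """True if the phrase appears in the delivered markup outside <script> tags,
    i.e., it would be visible even to a client that never executes JavaScript."""
    stripped = re.sub(r"<script\b.*?</script>", "", html, flags=re.S | re.I)
    return phrase in stripped

print(visible_without_js(raw_html, "Acme Widgets"))     # True: server-rendered
print(visible_without_js(raw_html, "Pricing: $19/mo"))  # False: injected by JS only
```

This is the modern equivalent of the old βview it in Lynxβ advice: anything that fails the check is invisible to non-rendering clients, so verify it in the URL Inspection tool before assuming it is indexed.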

In a message sent to API developers, Google said that starting April 1, 2026, Customer Match uploads through the Google Ads API will stop working for certain users.
Specifically, developers who havenβt uploaded Customer Match data in the past 180 days using their developer token will no longer be able to do so via the Ads API.
Whatβs changing. If you fall into that inactive bucket, any attempt to upload Customer Match lists through the Google Ads API after April 1 will fail. Instead, Google wants you to move those workflows to the Data Manager API. The change applies only to Customer Match uploads β all other campaign management and reporting tasks should continue as normal in the Google Ads API.

Why Google says itβs doing this. Google positions the Data Manager API as a more modern, unified data ingestion solution across its platforms, with stronger security protocols. It also includes features not available in the Ads API, such as confidential matching and enhanced encryption β signaling a push to centralize and better secure audience data handling.
Why we care. If you or your developers havenβt touched Customer Match uploads in the last six months, this could catch you off guard. After April 1, 2026, the old workflow simply wonβt work β and errors will replace uploads.
The takeaway. Check whether your developer token has been used for Customer Match recently and plan a migration to the Data Manager API now, before Google flips the switch.
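The 180-day window is straightforward to audit if you log the last Customer Match upload per developer token. The sketch below only illustrates the date arithmetic; the sample timestamps are hypothetical, and the cutoff date comes from the announcement.

```python
from datetime import date, timedelta

ENFORCEMENT_DATE = date(2026, 4, 1)     # per Google's announcement
INACTIVITY_WINDOW = timedelta(days=180)  # tokens idle longer than this are cut off

def upload_will_fail(last_upload: date, on: date = ENFORCEMENT_DATE) -> bool:
    """True if the developer token counts as inactive: no Customer Match
    upload within the 180 days preceding the enforcement date."""
    return (on - last_upload) > INACTIVITY_WINDOW

# Hypothetical tokens and their last recorded Customer Match upload:
print(upload_will_fail(date(2025, 6, 1)))    # True: ~10 months idle, will be blocked
print(upload_will_fail(date(2026, 1, 15)))   # False: recent upload, still works
```

Tokens that fall on the wrong side of this check need to be migrated to the Data Manager API before the deadline.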
First spotted. This announcement was surfaced by paid search specialist Arpan Banerjee, who shared the message he received from Google on LinkedIn.

ProtonVPN delivers secure, encrypted browsing with one of the most generous free plans in the VPN space. Built by the privacy-focused team behind Proton Mail, it offers unlimited data, a strict no-logs policy, and open-source apps across major platforms.
Privas AI adds a private conversation space to your website. Visitors can ask questions anonymously, understand your product, and decide before they talk to sales.
Instead of forcing users to click contact forms or WhatsApp buttons, your site can answer real questions in real time. Responses come from your actual website content, not generic AI, and you can step in when needed. Conversations stay private and are never used to train public models. Currently in guided private beta.
After having been in development for a very long time, Crimson Desert is almost upon us. Up until now, however, developer Pearl Abyss has only shown the game on PC, raising concerns over the console versions of the game and making more than a few suspect the developer was actually hiding console footage in a tragic repeat of the Cyberpunk 2077 situation back in 2022, forcing PR representatives to address the matter outright. Following the confirmation that Pearl Abyss is not hiding Crimson Desert console footage, details on the PlayStation 5 and PlayStation 5 Pro versions have been shared online, [β¦]
Read full article at https://wccftech.com/crimson-desert-4k-resolution-high-framerate-ps5-pro-pssr/

Apple didnβt add any kind of MagSafe charging on the MacBook Neo, meaning that the companyβs affordable 13-inch notebook only ships with two USB-C ports for topping up the battery and data transfer. Unfortunately, the $599 starting price will bring some compromises, with one of them being that the two ports wonβt deliver the same bandwidth. In short, if you want the fastest speeds when moving heaps of files, youβll have to avoid one of these ports. Fortunately, the MacBook Neo can be charged with both USB-C ports, itβs just one of them that will operate at a slower bandwidth [β¦]
Read full article at https://wccftech.com/macbook-neo-two-usb-c-ports-different-speeds/

Cyberpunk 2077 really did a number on how players look at major upcoming games. Yes, it's now an incredibly successful RPG, but its launch is an event that no one will soon forget, and CD Projekt RED's blatant misdirection regarding how the game ran on last-gen consoles has made players incredibly wary of any big game that doesn't put console gameplay footage front and center, and now, Crimson Desert, one of the most anticipated titles of 2026, seems to be feeling the ripple effects from that launch. To be clear, we have seen footage of Crimson Desert running on consoles, [β¦]
Read full article at https://wccftech.com/crimson-desert-pearl-abyss-not-hiding-console-gameplay-footage-ps5-xbox/

Although it was widely known that Remedy Entertainment has been working on the second entry in the Control series for a while, the reveal of Control Resonant was still quite a surprise for fans of the Finnish studio. Right from its very first showing during last year's The Game Awards show, it was clear how the second entry in the series would be a very different game from its predecessor, closer to a character action game than to the third-person shooter formula seen in Jesse Faden's venture into the Oldest House. The first few looks at Control Resonant's gameplay gave [β¦]
Read full article at https://wccftech.com/control-resonant-target-60-fps/

The rumors were consistent in claiming that the MacBook Neo would ship with the A18 Pro, and now that Apple is officially done announcing its most affordable portable Mac ever, our curiosity compelled us to find out what shortcuts the company took on the internal hardware side of things. Sure enough, we found one major difference between the SoC running in the latest release and the one powering the previous-generation iPhone 16 Pro and iPhone 16 Pro Max. The A18 Pro running in the MacBook Neo features a 5-core CPU, one fewer than the iPhone 16 Proβs silicon. Where the iPhone [β¦]
Read full article at https://wccftech.com/macbook-neo-a18-pro-slower-than-iphone-16-pro-soc/

Now that Samsung has demonstrated its competence with the new Exynos 2600 chip, which has bested Qualcomm's Snapdragon 8 Elite Gen 5 chip in various benchmarks, especially the ones related to natural language understanding, object detection, and image classification, all the while maintaining an enviable thermal footprint, the South Korean behemoth is finally doubling down on its native silicon, with the sampling of the next-gen Exynos 2700 chip already underway. Samsung is already fabricating production samples for its next-gen Exynos 2700 chip, following the completion of the design process in late 2025 According to a South Korean publication, Samsung has [β¦]
Read full article at https://wccftech.com/samsung-accelerates-the-development-of-the-exynos-2700-chip-with-early-sampling/


Google has long been considered the gold standard for ad spend compared to social platforms. But scale doesnβt equal immunity. Click fraud remains a persistent risk, and the safety of your budget depends entirely on where your ads are running.
While Google Ads offers immense reach, its campaigns arenβt created equal. Some are significantly more exposed to malicious activity than others. To protect your margins, you must understand what constitutes click fraud, where it originates, and how to shield your campaigns.
Invalid clicks are interactions that lack legitimate consumer intent. Because they arenβt driven by real human interest, they skew performance data and drain budgets without any possibility of conversion. These clicks generally originate from four primary sources:
Dig deeper: Own your branded search: Building a competitive PPC defense
The average invalid click rate across Google Ads is 11.4%, a recent study by Fraud Blocker found. The figure is climbing.
That upward trend becomes clearer over time. In 2010, the average invalid click rate sat at 5.9%. By 2024, that number jumped to 12.3%. This doubling of fraud is likely driven by the increased sophistication of AI-powered bots and malware that can more effectively bypass basic security filters.
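Those averages translate directly into wasted budget. A quick back-of-the-envelope calculation using the studyβs figures (the monthly spend amount is illustrative, and it assumes invalid clicks cost roughly the same as legitimate ones):

```python
def wasted_spend(monthly_spend: float, invalid_rate: float) -> float:
    """Estimated budget consumed by invalid clicks, assuming invalid
    clicks cost about the same on average as legitimate ones."""
    return monthly_spend * invalid_rate

# At the 2024 average invalid click rate of 12.3%, on a $10,000/month budget:
print(round(wasted_spend(10_000, 0.123), 2))  # 1230.0 wasted per month
# Versus the 2010 average of 5.9%:
print(round(wasted_spend(10_000, 0.059), 2))  # 590.0
```

The gap between those two figures is the cost of a decade and a half of increasingly sophisticated bot traffic, before accounting for the downstream damage invalid clicks do to conversion data and Smart Bidding signals.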

Invalid click rates fluctuate based on your campaign setup. Three key factors typically drive these numbers:
Not all Google Ads inventory carries the same level of risk. Hereβs how campaign types stack up from highest to lowest exposure.
Across the diverse range of industries my clients serve, Iβve identified specific patterns in how fraud manifests across different sectors. As a result, the best prescription is proactive. Address these vulnerabilities by shifting from broad, automated settings to a more refined, high-intent strategy.
The following table highlights the specific patterns we monitor to lower invalid click rates:
| Factor | Higher risk (Aggressive) | Lower risk (Strict) |
| --- | --- | --- |
| Location | Global or βPresence or Interestβ | βPresence Onlyβ (User is physically there) |
| Keywords | Broad match / Generic terms | Exact match / Long-tail phrases |
| Networks | Including βSearch Partnersβ and βDisplayβ | Google Search Network only |
| Exclusions | No negative keywords or placement lists | Robust negative lists and app exclusions |
| Scheduling | 24/7 (Bots often spike at night) | Custom schedules aligned with business hours |
Here are proactive steps you can take to reduce your exposure to fraud.
Dig deeper: PPC in the age of zero-click search: How to stay profitable
Google is far from a uniform entity. Itβs a diverse ecosystem of distinct environments where risk levels can vary by as much as 400%.
Prioritizing high-quality traffic results in superior data integrity, more precise optimization, and reduced acquisition costs. In todayβs market, the strategic structure of your campaigns is just as vital to your success as the size of your budget.


One of the most profitable Google Ads targeting tactics is retargeting: showing ads to people who are already familiar with your business. But if you still think that βretargetingβ means a Display campaign chasing users around the web with banner ads, youβre missing out on how βYour data segmentsβ actually function today.
Letβs explore how you can leverage your proprietary audience data in new ways, and what mistakes to avoid in 2026 and beyond.
Retargeting means showing ads to people who are already familiar with your business. Google uses the euphemistic name βYour data segmentsβ to refer to all the retargeting lists in your account.
A variety of different retargeting methods are available in Google Ads. They mirror what youβll find on other ad platforms like Meta, LinkedIn, or TikTok. I find it helpful to group them into four categories:
Many practitioners overlook this detail: your data segments arenβt just about ad targeting.
Even if you donβt have a single retargeting campaign running, the mere existence of these lists in your account provides a vital signal for Smart Bidding and Optimized Targeting.
For example, when you upload a customer list, youβre telling Google, βThese are the people who actually buy from me.β Even if you never add that list to your audience signal in Performance Max, Google will still use it to understand likely converters and adjust bidding/targeting accordingly.
Similarly, letβs say you only run Search and Shopping campaigns, and you use Target ROAS bidding. When Google is trying to set the right bid for the right user at the right time, their presence (or lack thereof) on a βyour data segmentβ list is one of many signals incorporated into that bidding calculation.
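When you do upload customer lists, Google matches on hashed identifiers rather than raw data; its documented requirement is SHA-256 over normalized values, with emails trimmed and lowercased. A minimal sketch of that normalization step (Googleβs docs add further rules, such as handling periods in Gmail addresses, so treat this as a starting point, not a production implementation):

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize (trim + lowercase) and SHA-256 hash an email
    for a Customer Match upload."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person, however the address was typed, yields the same hash,
# which is what lets Google match your list against signed-in users:
a = hash_email("  Jane.Doe@Example.com ")
b = hash_email("jane.doe@example.com")
print(a == b, len(a))  # True 64
```

Skipping the normalization step is one of the most common reasons match rates come back lower than expected: `Jane@` and `jane@` hash to completely different values.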
Different campaign types handle audience data differently. Itβs important to know the distinction so you can plan your targeting strategy accordingly.
If youβre new to retargeting, I find Demand Gen the best place to start. Itβs built for visual storytelling and works well with the Google Engaged Audience or basic website visitor lists.
If you have some experience with retargeting campaigns, you might want to try New Customer Acquisition or Customer Retention mode in PMax or Shopping, as these are powered by Your data segments.
Over-segmenting. I know it can be tempting to create 50 different lists: βPeople who visited the cart on a Tuesday,β or βPeople who looked at three pages but didnβt click the βAboutβ section.β
Unless youβre spending six figures or more every month, this level of granularity doesnβt help, and may actually hurt your campaigns. Googleβs AI needs data density to learn. When you slice your audience into tiny slivers, you donβt have enough βmatched recordsβ for the system to optimize.
Upload your unique data to Google Ads, keep your strategy simple, and let the bidding algorithms do the heavy lifting in driving returning customers for you.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it β all in a quick 3-minute read.
Microsoft isnβt launching Windows 12 this year; they would be foolish to try it. A new report from PC World says Microsoft plans to release Windows 12 this year. This new OS will reportedly feature a modular βCorePCβ architecture and require NPU hardware for its deep AI integrations. Since then, Windows Central has rubbished this [β¦]
The post No, Windows 12 isnβt coming this year appeared first on OC3D.
Tork governs every tool call your AI agents make. It sits between agents and external systems as an MCP gateway and policy engine, enforcing allowlists, blocking dangerous flags, and routing high-risk actions to human reviewers. You can detect and redact over 50 PII types, pause agents with a kill switch, and generate cryptographic, tamper-evident audit receipts. Integrate with LangChain, CrewAI, AutoGen, OpenAI, and MCP using SDKs for major languages to add governance without rewrites.
After Bloomberg reporter Jason Schreier initially pointed to PlayStation "backing away from putting their traditional single-player games on PC" on a podcast, he's now published a longer report, which digs a bit deeper into the situation. Essentially, it seems Sony has decided to pivot its strategy of putting its more traditional single-player games on PC, shifting back to keeping them as PlayStation console exclusives. Now, both Ghost of Yotei and Saros won't make the jump to PC as a result. That means PC players who loved Returnal or Ghost of Tsushima on PC will have to buy a PS5 if [β¦]
Read full article at https://wccftech.com/ghost-of-yotei-saros-wont-be-coming-to-pc-sony-playstation/

Looks like the new APU series will be better suited to mini PCs, as the number of PCIe lanes has been noticeably reduced. Ryzen AI 400 Desktop CPUs Only Bring 10 or 12 PCIe 4.0 Lanes, Limiting Lane Width for GPUs/NVMe SSDs Two days ago, AMD finally debuted its first Zen 5-based desktop APU series for the AM5 platform, giving users the flexibility to build APU-based gaming builds. AMD hasn't been so aggressive in the desktop segment when it comes to delivering peak graphical performance through integrated graphics. Still, the debut of Ryzen AI 400 and Ryzen AI Pro 400 series [β¦]
Read full article at https://wccftech.com/amd-nerfs-ryzen-ai-400-desktop-cpus-with-just-10-12-usable-pcie-lanes/

The PC market has been contracting since the start of this year, and in particular, the GPU segment has taken a hit, with both NVIDIA and Intel seeing a decline in market share. NVIDIA & Intel Witness a Decline in GPU Market Share, Yet AMD Somehow Pulls Off an Increase The GPU industry has seen difficult times before, with one prominent example being the crypto-mining era, when gamers couldn't get their hands on units at all. Back then, the supply was diverted to professional demand and did not reach consumers at all; today, the situation is much grimmer. According to [β¦]
Read full article at https://wccftech.com/pc-gpu-market-enters-rough-sailing-as-shipments-drop/

The most affordable notebook has been added to Appleβs lineup today, and it is called the MacBook Neo. After years of rumors, the Cupertino firm is coming to chip away at the market share of its competitors with a highly affordable offering that comes in four attractive paint jobs and is fueled by the previous-generation A18 Pro chipset. Here are more details you'll love to check out. The $599 price of the MacBook Neo is expected to undercut the competition The affordable portable Mac sports a 13-inch Liquid Retina display with a 2,408 x 1,506 resolution and 500 nits brightness. [β¦]
Read full article at https://wccftech.com/apple-official-announces-macbook-neo/

Satellite-based communication and the internet are the next big tech frontier for smartphones, and MediaTek is trying to position itself as a tech leader by formalizing a high-stakes partnership with SpaceX to bring native Starlink compatibility for emergency messages to the M90 modem. MediaTek and SpaceX partner to bring native Starlink compatibility for emergency messages to the M90 modem. MediaTek has now announced that it is collaborating with SpaceX's Starlink to "support wireless emergency alert messages via satellite communication," allowing global smartphone users "to receive alerts from the Commercial Mobile Alert System (CMAS), Wireless Emergency Alerts (WEA) framework, and the […]
Read full article at https://wccftech.com/mediatek-partners-with-spacex-to-bring-starlink-support-to-its-m90-modem/

The Financial Times has published a paywalled report revealing that the US government is presently evaluating whether to compel the giant Chinese gaming publisher Tencent to divest from US gaming companies for national security purposes. Top officials have held internal meetings to assess whether Tencent's many investments can be allowed to continue, since they give the company access to data on millions of American gamers. A cabinet-level meeting to discuss the matter was scheduled for today, but was postponed due to scheduling conflicts. US President Donald Trump is preparing to meet Chinese President Xi Jinping in China in April, so presumably, […]
Read full article at https://wccftech.com/us-government-tencent-divest-gaming-companies-national-security/

Set a course for your destination with the new Star Trek experience on Waze! In celebration of "Starfleet Academy," the new series streaming on Paramount+, The Doctor is…
We design Gemini's safeguards in consultation with medical and mental-health professionals.
Lately, I've been spending most of my day inside Cursor running Claude Code. I'm not a developer. I run a digital marketing agency. But Claude Code within Cursor has become the fastest way for me to handle many tasks I want to do, including pulling and analyzing data from Google Search Console, GA4, and Google Ads.
The setup takes about an hour. After that, you can ask things like "which keywords am I paying for that I already rank for organically?" and get an answer in seconds instead of spending an afternoon with spreadsheets. (I wouldn't have been the one spending an afternoon with spreadsheets anyway, but now nobody has to.)
Here's the step-by-step process I developed while analyzing data for our agency clients. If this looks too technical, paste the URL of this article into Claude and ask it to walk you through it step by step.
What you end up with is a project directory where Claude Code has access to Python scripts that pull live data from your Google APIs. You fetch the data, it lands in JSON files, and then you just talk to it.
No dashboards to build. No Looker Studio templates to maintain. You're basically giving Claude Code the same data your team would look at, and letting it do the cross-referencing.
seo-project/
├── config.json                 # Client details + API property IDs
├── fetchers/
│   ├── fetch_gsc.py            # Google Search Console
│   ├── fetch_ga4.py            # Google Analytics 4
│   ├── fetch_ads.py            # Google Ads search terms
│   └── fetch_ai_visibility.py  # AI Search data
├── data/
│   ├── gsc/                    # Query + page performance
│   ├── ga4/                    # Traffic by channel, top pages
│   ├── ads/                    # Search terms, spend, conversions
│   └── ai-visibility/          # AI citation data
└── reports/                    # Generated analysis
Everything runs through a Google Cloud service account. One service account covers both GSC and GA4, which is nice. Google Ads needs its own OAuth setup, which is less nice but manageable.

The service account email looks like your-project@your-project-id.iam.gserviceaccount.com. You'll add this email address to each client's GSC and GA4 properties, the same way you'd add any team member.
For agencies: one service account works across all clients. Add it to each property, update a config file with the property IDs, and you're set.
Google Ads is different. You need:
- A developer token
- OAuth client credentials (client ID and secret)
- A refresh token
- The customer ID for each account
The developer token requires an application. For agency use, describe it as "automated reporting for marketing clients." Approval usually takes 24-48 hours.
If you're using a Manager Account (MCC), one developer token and one refresh token cover all sub-accounts. You just change the customer ID per client.
If you don't have API access or an MCC (maybe it's a new client and you're still getting set up), you can skip the API entirely. Download 90 days of keyword and search terms data as CSVs from the Google Ads UI, drop them in your data directory, and Claude Code will work with those just as well. That's how we handle clients who aren't in our MCC yet.
All the examples below assume you're working in the terminal on a Mac or Linux machine. If you're on Windows, the easiest path is Windows Subsystem for Linux (WSL).
pip install google-api-python-client google-auth google-analytics-data google-ads
Each fetcher is a short Python script that authenticates, pulls data, and saves JSON. I didn't write these from scratch. I described what I wanted to Claude Code and it wrote them.
One thing that genuinely surprised me: I never had to read the API documentation. Not for GSC, GA4, or Google Ads.
I'd say something like "I want to pull the top 1,000 queries from Search Console for the last 90 days," and Claude Code would figure out the authentication, endpoints, and query parameters. It already knows these APIs. You just tell it what data you want.
Here's what the scripts look like.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/webmasters.readonly']

def get_gsc_service():
    credentials = service_account.Credentials.from_service_account_file(
        'service-account-key.json', scopes=SCOPES
    )
    return build('searchconsole', 'v1', credentials=credentials)

def fetch_queries(service, site_url, start_date, end_date):
    response = service.searchanalytics().query(
        siteUrl=site_url,
        body={
            'startDate': start_date,
            'endDate': end_date,
            'dimensions': ['query'],
            'rowLimit': 1000
        }
    ).execute()
    return response.get('rows', [])
You get back queries with clicks, impressions, CTR, and average position. Save it as JSON.
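To get those rows into the data directory as JSON, a small save helper is enough. This is a sketch; the helper name and file layout are my own, matching the directory structure shown earlier:

```python
import json
from pathlib import Path

def save_rows(rows, out_path):
    """Write fetched API rows to a JSON file Claude Code can read later."""
    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(rows, indent=2))
    return out

# e.g. save_rows(fetch_queries(...), "data/gsc/example.com-queries.json")
```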
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    RunReportRequest, DateRange, Metric, Dimension
)
from google.oauth2 import service_account

def get_ga4_client():
    credentials = service_account.Credentials.from_service_account_file(
        'service-account-key.json',
        scopes=['https://www.googleapis.com/auth/analytics.readonly']
    )
    return BetaAnalyticsDataClient(credentials=credentials)

def fetch_traffic_by_channel(client, property_id, start_date, end_date):
    request = RunReportRequest(
        property=f"properties/{property_id}",
        date_ranges=[DateRange(start_date=start_date, end_date=end_date)],
        dimensions=[Dimension(name="sessionDefaultChannelGroup")],
        metrics=[
            Metric(name="sessions"),
            Metric(name="totalUsers"),
            Metric(name="bounceRate"),
        ]
    )
    return client.run_report(request)
Google Ads uses something called Google Ads Query Language (GAQL). If you've ever written a SQL query, this will look familiar. If you haven't, don't worry; Claude Code will write it for you:
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      search_term_view.search_term,
      metrics.impressions,
      metrics.clicks,
      metrics.cost_micros,
      metrics.conversions
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.impressions DESC
"""

response = ga_service.search(customer_id="1234567890", query=query)
This pulls the same data as the Search Terms report you'd download from the Google Ads UI: impressions, clicks, cost, conversions, match type, campaign, and ad group.
One JSON file per client. Nothing fancy, just the property IDs and some context:
{
  "name": "Client Name",
  "domain": "example.com",
  "gsc_property": "https://www.example.com/",
  "ga4_property_id": "319491912",
  "google_ads_customer_id": "9270739126",
  "industry": "Higher Education",
  "competitors": [
    "https://competitor1.com/",
    "https://competitor2.com/"
  ]
}
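On the fetcher side, a loader can read this file and fail fast when an ID is missing. A minimal sketch; the function name is my own, and the required keys are the ones the fetchers above actually use:

```python
import json
from pathlib import Path

REQUIRED_KEYS = ["gsc_property", "ga4_property_id", "google_ads_customer_id"]

def load_client_config(path):
    """Read a per-client config.json and fail fast on missing property IDs."""
    cfg = json.loads(Path(path).read_text())
    missing = [k for k in REQUIRED_KEYS if k not in cfg]
    if missing:
        raise ValueError(f"{path}: config missing keys: {missing}")
    return cfg
```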
So now you've got JSON files from GSC, GA4, and Ads sitting in your project directory. Claude Code can read all of them at once and answer questions that would normally mean a lot of tab-switching and VLOOKUP work.
The single most valuable question I've found:

When I ran this for a higher education client, it identified:
That analysis took about 90 seconds. The equivalent manual process (downloading CSVs from GSC and Ads, VLOOKUPing across them, categorizing the overlaps) takes most of an afternoon.
Once you have GSC + GA4 + Ads data loaded:
Claude Code isn't doing anything a human couldn't do with spreadsheets. It's doing it in seconds, and you can follow up with another question without rebuilding the whole analysis from scratch.
Traditional SERP positions aren't the whole picture anymore. Between Google's AI Overviews, AI Mode, Copilot, ChatGPT, and Perplexity, you need to know whether AI systems are citing your content.
This is especially true in verticals like higher education, where prospective students increasingly start their research in AI search tools.
Tools like Scrunch, Semrush's AI Visibility toolkit, or Otterly.ai will track your brand's presence across ChatGPT, Perplexity, Gemini, Google AI Overviews, and Copilot.
Export the data as CSV or JSON and drop it in your data directory. Claude Code can then cross-reference AI citations against your GSC and Ads data.
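As a sketch of the kind of cross-referencing script Claude Code ends up writing for this, here is one way to flag pages that AI tools cite but that get little organic CTR. The field names (`url`, `page`, `ctr`) are assumptions about how your exports are structured:

```python
import csv
import json

def cited_but_low_ctr(citations_csv, gsc_json, ctr_threshold=0.02):
    """Return pages cited by AI tools whose organic CTR sits below a threshold."""
    with open(citations_csv, newline="") as f:
        cited_urls = {row["url"] for row in csv.DictReader(f)}
    with open(gsc_json) as f:
        gsc_rows = json.load(f)  # expects [{"page": ..., "ctr": ...}, ...]
    return [
        r["page"] for r in gsc_rows
        if r["page"] in cited_urls and r["ctr"] < ctr_threshold
    ]
```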

When I did this for our own site, we discovered two blog posts competing for the same AI citations on GEO-related queries.
One had 12 times as many Copilot citations as the other, despite both targeting similar intent. That led to a consolidation decision we wouldn't have made based solely on traditional rank data. This kind of AI search cannibalization is something most SEO teams aren't yet checking for.
You don't need an enterprise tool to start. There are several APIs that let you pull AI search data directly, and the costs are lower than you'd think.
DataForSEO AI Overview API: The most accessible option. Pay-as-you-go at about $0.01 per query, with a $50 minimum deposit. You send a keyword, and it returns the full AI Overview content from Google SERPs, including which URLs are cited. It also has a separate LLM Mentions API that tracks how LLMs reference brands across platforms.
# DataForSEO AI Overview - simplified example
import base64

import requests

# DataForSEO authenticates with HTTP Basic auth built from your API login/password
token = base64.b64encode(b"your_login:your_password").decode()
auth_headers = {"Authorization": f"Basic {token}"}

payload = [{
    "keyword": "best higher education marketing agencies",
    "location_code": 2840,  # US
    "language_code": "en"
}]

response = requests.post(
    "https://api.dataforseo.com/v3/serp/google/ai_overview/live/advanced",
    headers=auth_headers,
    json=payload
)
# Returns: AI Overview text, cited URLs, references
SerpApi: Starts at $75/month for 5,000 searches. Returns structured JSON for the full Google SERP, including AI Overviews. Good documentation, Python client library, and a free tier for testing.
SearchAPI.io: Similar to SerpApi, starts at $40/month. Also offers a separate Google AI Mode API that captures AI-generated answers with citations.
Bright Data SERP API: Pay-as-you-go starting around $1.80 per 1,000 requests. Set brd_ai_overview=2 to increase the likelihood of capturing AI Overviews. Also has an MCP server if you want tighter agent integration.
Bing Webmaster Tools: Free, and the only first-party AI citation data available from any major platform right now. Shows how often your content appears as a source in Copilot and Bing AI responses, with page-level data and the "grounding queries" that triggered citations. No API yet (Microsoft says it's on the backlog), but you can export CSVs.
DIY: Direct LLM API Calls: The cheapest approach for small-scale monitoring. Write a Python script that sends a consistent set of prompts to the OpenAI, Anthropic, and Perplexity APIs, then parses responses for brand mentions. Perplexity's Sonar API is especially useful here because it includes web citations in responses, and citation tokens are free. Total cost: under $20/month for a modest prompt library.
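The parsing half of that DIY script is simple enough to sketch. The brand list is a placeholder, and the API calls themselves would go where the closing comment indicates:

```python
import re

BRANDS = ["ExampleAgency", "CompetitorOne"]  # placeholder brand list

def count_brand_mentions(response_text, brands=BRANDS):
    """Case-insensitive whole-word mention counts for each tracked brand."""
    counts = {}
    for brand in brands:
        pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
        counts[brand] = len(pattern.findall(response_text))
    return counts

# In the real script you would loop over a fixed prompt set, call the
# OpenAI / Anthropic / Perplexity APIs, and feed each answer through
# count_brand_mentions(), logging results per prompt per platform.
```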
The general pattern: Pick one SERP API for Google AI Overview data, use Bing Webmaster Tools (it's free), and supplement with direct LLM API calls or a dedicated tracker if budget allows.
So what does this actually look like on a Tuesday morning?
Setup: Once per client, ~15 minutes
Monthly data pull: ~5 minutes
python3 run_fetch.py --sources gsc,ga4,ads
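run_fetch.py itself can be a thin dispatcher. A minimal sketch of the argument handling; the fetcher calls are stand-ins for the scripts above:

```python
import argparse

def parse_sources(argv=None):
    """Parse --sources gsc,ga4,ads into a validated list."""
    parser = argparse.ArgumentParser(description="Pull fresh data per client")
    parser.add_argument("--sources", default="gsc,ga4,ads",
                        help="comma-separated subset of: gsc,ga4,ads,ai")
    args = parser.parse_args(argv)
    sources = [s.strip() for s in args.sources.split(",") if s.strip()]
    unknown = set(sources) - {"gsc", "ga4", "ads", "ai"}
    if unknown:
        parser.error(f"unknown sources: {sorted(unknown)}")
    return sources

if __name__ == "__main__":
    for source in parse_sources():
        print(f"fetching {source}...")  # call the matching fetcher here
```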
Analysis (as needed): Open Claude Code in the project directory and ask questions. The data is right there.
Output: Claude Code generates a markdown report. When I need something client-facing, I push it to Google Docs using a separate tool I built called google-docs-forge. It converts markdown into a properly formatted Google Doc, so the output doesn't look like it came from a terminal.

The whole process takes about 35 minutes for a new client: setup, fetch, analysis. Monthly refreshes take about 20 minutes, including analysis time. Compare that to the manual alternative of downloading CSVs from three different platforms, cross-referencing in spreadsheets, and writing up findings.
I don't want to oversell this. Claude Code is reading your data and finding patterns across sources faster than you can manually. It's not telling you what to do about those patterns. You still need someone who understands the client's business, their competitive situation, and what they're actually trying to accomplish. The tool finds the interesting data. The strategist decides what to do with it.
You also need to verify what it gives you. LLMs can hallucinate, and that includes data analysis. I've seen Claude Code confidently report a number that didn't match the JSON file. It's rare, but it happens.
Treat the output like you'd treat work from a new analyst: trust but verify, especially before anything goes to a client. Spot-check the numbers against the source data. If something looks too clean or too dramatic, go look at the raw file.
It also doesn't replace your existing platforms. If you need historical trend data, automated alerts, or a client-facing dashboard, you still want a Semrush or an Ahrefs. What this gives you is the ability to ask ad hoc questions across multiple data sources, which none of those platforms does well on its own.
And the GEO/AI visibility tracking space is still immature. The data from AI citation tools is directionally useful. Wind sock, not GPS. Google doesn't publish AI Overview or AI Mode citation data through any official API, so every third-party tool is approximating. Bing's Copilot data is the most reliable because it's first-party, but it only covers the Microsoft ecosystem.
If you want to give this a shot:
1. Start with Search Console: one service account, the simplest setup.
2. Add GA4: the same service account covers it.
3. Layer in Google Ads: the API, or CSV exports from the UI to skip it.
4. Add AI visibility data: Bing Webmaster Tools is free, and SERP APIs are cheap.
Each layer builds on the last. You don't need all four to get value. The GSC + GA4 combination alone surfaces insights that take hours to find manually.

Chrome 146 has introduced an early preview of WebMCP behind a flag. WebMCP (Web Model Context Protocol) is a proposed web standard that exposes structured tools on websites, showing AI agents exactly what actions they can take and how to execute them.
Here's some context around what that actually means.
The internet was originally built for humans. We designed buttons, dropdowns, and forms for people to read, understand, and use. But now there's a new type of user emerging: AI agents. Soon, they'll be able to complete registrations, buy tickets, and take any action needed to complete a goal on a website.
Right now, AI agents face a major challenge. They must crawl websites and reverse-engineer how everything works. For example, to book a flight, an agent needs to identify the right input fields, guess the correct data format, and hope nothing breaks in the process. It's inefficient.
The WebMCP standard will solve this issue by exposing the structure of these tools so AI agents can understand and perform better.
Letβs say you need to book a flight.
Without WebMCP: An AI agent would crawl the page looking for a button that would say something like "Book a Flight" or "Search Flights." The agent reads the screen, guesses which fields need what information, and hopes the form accepts its input.
With WebMCP: Instead of thinking "I need to find a 'Book a Flight' button," the agent thinks "I need to call the bookFlight() function with clear parameters (date, origin/destination, passengers) and receive a structured result." The agent doesn't search for visual elements. It calls a function, just like developers do when working with APIs.
WebMCP provides JavaScript APIs and HTML form annotations so AI agents know exactly how to interact with the page's tools. It works using basically three steps:
Your website exposes a list of actions, each one describing what it does, what inputs it accepts, what outputs it returns, and what permissions it requires.
AI agents are quickly becoming part of our daily workflows. Soon we won't book our own flights, fill out forms, or publish content ourselves; we'll ask an AI to do it for us.
But right now, AI agents struggle to interact with websites reliably. They currently use two imperfect approaches:
Automation (fragile and unreliable): In this approach, the AI agent reads the screen, clicks buttons, and types into fields, just like a human would. However, websites are constantly updated. Button colors change. Field names change. Classes change. A/B tests create different versions of the same page. What worked yesterday may not work today.
APIs (limited availability): APIs provide a direct, structured way for agents to interact with websites. The problem is that most websites don't have public APIs, and those that do are often missing key features or data that's available through the user interface.
WebMCP fills the gap between those two approaches. It lets websites expose actions in a way that matches how the web actually works. Think of it as making your existing web interface readable by AI agents, without the fragility of UI automation or the overhead of maintaining a separate API.

Just as websites optimized for search engines in the 2000s, WebMCP represents the next evolution: optimization for AI agents. Early adopters who implement WebMCP could gain a competitive advantage as AI-powered search and commerce become mainstream.
But this isn't just about SEO anymore. It's about seizing a broader growth opportunity. SEO, AEO (AI engine optimization), and agentic optimization are all knowledge areas with one common goal: improving revenue. WebMCP opens the door to being not just discoverable, but directly actionable by the agents your future customers will use.
To make this more concrete, here are some scenarios where WebMCP changes the game:
WebMCP gives developers two ways to make their websites agent-ready:

The Imperative API lets developers define tools programmatically through a new browser interface called navigator.modelContext. You register a tool by giving it a name, a description, an input schema, and an execute function.
Here's a simplified example of an ecommerce product search tool:

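A sketch of what such a registration could look like, assuming the early-preview API surface described above (navigator.modelContext.registerTool with a name, description, input schema, and execute function; the tool itself and the search endpoint are hypothetical):

```javascript
// Sketch of WebMCP imperative tool registration. The API shape is assumed
// from Google's early-preview description and may change before release.
const productSearchTool = {
  name: "searchProducts",
  description: "Search the product catalog by keyword and price range",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search keywords" },
      maxPrice: { type: "number", description: "Maximum price in USD" }
    },
    required: ["query"]
  },
  async execute({ query, maxPrice }) {
    // In a real site this would call the existing search backend.
    const results = await fetch(`/api/search?q=${encodeURIComponent(query)}`)
      .then((r) => r.json());
    return { products: results.filter((p) => !maxPrice || p.price <= maxPrice) };
  }
};

// Feature-detect: the API only exists behind the Chrome 146 flag.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool(productSearchTool);
}
```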
The agent sees the tool, understands what it does, knows what input it needs, and can call it directly.
Developers can register tools one at a time with registerTool(), replace the full set with provideContext() (useful when your app's state changes significantly), or remove them with unregisterTool() and clearContext().
The Declarative API transforms standard HTML forms into agent-compatible tools by adding a few HTML attributes.
Here's a simplified example of a restaurant reservation form:

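A sketch of what that could look like, using the toolname, tooldescription, and toolautosubmit attributes from Google's early-preview description (exact attribute names may change, and the form fields here are hypothetical):

```html
<form toolname="reserveTable"
      tooldescription="Reserve a table at the restaurant"
      toolautosubmit
      action="/reserve" method="post">
  <label>Date <input type="date" name="date" required></label>
  <label>Time <input type="time" name="time" required></label>
  <label>Party size <input type="number" name="partySize" min="1" required></label>
  <button type="submit">Reserve</button>
</form>
```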
By adding toolname and tooldescription to a form, the browser automatically translates its fields into a structured schema that AI agents can interpret. When an agent calls the tool, the browser populates the fields and, if toolautosubmit is set, it submits the form automatically.
The big takeaway: Existing websites with standard HTML forms can become agent-compatible with minimal code changes.
Google's early preview documentation includes some practical guidance on designing good WebMCP tools. A few highlights worth noting:
WebMCP is currently available as an early preview behind a feature flag in Chrome 146. It's not production-ready yet, but developers and curious teams can already experiment with it.
Once the flag is enabled, you can install the Model Context Tool Inspector Extension to see WebMCP in action. The extension lets you inspect registered tools on any page, execute them manually with custom parameters, or test them with an AI agent using Gemini API support. Google also has a live travel demo where you can see the full flow, from discovering tools to invoking them with natural language.
In the same way that mobile-first design changed how we build websites, agent-ready design could define the next generation of web applications.
That said, WebMCP is still in early preview. The final version will likely change. The Chrome team is actively discussing rolling back parts of what they've been building with the embedded LLM API (like summarization and other features). So what we're seeing now is a starting point, not the finished product.
WebMCP is simply the next chapter in AI optimization. While aiming for discoverability and citation is still essential, WebMCP opens up a new opportunity for brands: making entire web experiences and functionality accessible to AI agents. It's not just about being found or cited. It's about being usable by the next generation of web users.
Start experimenting with WebMCP, but don't bet your roadmap on it yet. The standard is evolving, and early adopters will have an advantage, but only if they stay flexible as the standard matures.
The websites that win in an agent-driven web will be those that make it easy for AI to complete tasks, not just find information.
This article was originally published on LOCOMOTIVE (as WebMCP: The Standard That Lets AI Agents Call Website Functions Directly) and is republished with permission.

Ashes of the Singularity II's PC demo has been extended until March 16th. Stardock and Oxide Games' Ashes of the Singularity II demo was one of the most-played demos from Steam Next Fest, and to celebrate, the demo's availability has been extended until March 16th. Originally, this demo was intended to be available only until […]
The post Stardock extends Ashes of the Singularity II demo period after Steam Next Fest success appeared first on OC3D.
EOB Extractor parses Explanation of Benefits documents into structured JSON you can trust. Upload PDFs or images to see billed, allowed, insurance paid, and patient responsibility with plain-language CPT descriptions. It supports EOBs from major insurers and offers an API to integrate extraction into your workflow. Start free and buy credits as needed to automate claim review and billing reconciliation.
Years after its release, Fallout: New Vegas continues to be one of the most popular entries in the series, and demand for a remaster continues to be high. Late last month, it seemed like fans' wishes would come true in the near future, as development studio Iron Galaxy seemed to tease that it was working on this highly requested project. However, the developer has now confirmed that there was nothing to see in the image they shared a few days back. On Bluesky, the development studio behind multiple ports and remasters such as Tony Hawk's Pro Skater 3 + 4, […]
Read full article at https://wccftech.com/fallout-new-vegas-remaster-iron-galaxy-not-working/

New price hikes are reportedly being implemented for the ASUS RTX 50 series stack, except for one SKU, as well as the RDNA 4 series. ASUS Reportedly Increases RTX 50 Series Prices by up to $72 for Flagship Models; RTX 5060 Ti 8 GB Remains Unchanged As Well As RDNA 4 GPUs. ASUS has once again increased the prices of its products, particularly its GPUs, as per the latest report by Channel Gate. Almost every GPU vendor has implemented new price hikes this year, and they continue to revise the prices, citing higher memory prices. As Channel Gate reports, ASUS […]
Read full article at https://wccftech.com/asus-reportedly-implements-a-price-hike-for-rtx-50-series-in-china-but-no-changes-for-amd-gpus/

A gaming laptop purchase only makes sense when you have AAA masterpieces to immerse yourself in. It just so happens that Capcom's Resident Evil Requiem has shot up in popularity because it is one of the best experiences you'll ever go through. Unfortunately, it is a demanding title, but not if you own the right hardware; the problem is, where should you begin? We have a suggestion, and it is honestly one of the best gaming laptop deals that we've come across, and not just because Amazon has taken $262 off the Lenovo Legion Pro 7i, which is now listed for $1,887.98. We're recommending […]
Read full article at https://wccftech.com/lenovo-legion-pro-7i-rtx-5070-ti-free-resident-evil-requiem-copy-bundle-available-for-262-off-on-amazon/

Woven is a personal trainer for your relationship

Google posted a new help document on "Things to know about Google's web crawling." While many of those "things to know" are already known, Google felt it would be a good idea to make this document in order to provide "basic educational information about crawling to better highlight various resources about crawling that are available to site owners."
The document has 9 items posted in it right now including:
Frequent crawling is a good sign, Google wrote.
Other items. Here is the full list, but make sure to check out the help document to read it all. None of it is new but it is a helpful refresher:
Why we care. Crawling is a fundamental requirement for SEO and being found in Google Search and other Google surfaces. This help document might help you quickly understand how Google crawling works and what you can aim to do to improve your site's crawlability.

EA job listing confirms Javelin Anticheat ARM support efforts. EA has posted a job listing for a new "Senior Anti-Cheat Engineer" who specialises in ARM64 hardware. EA wants to add ARM64 support to its Javelin Anticheat software, the anti-cheating solution that EA uses in Battlefield and other competitive games. This job listing comes at a […]
The post EA Job Listing details planned ARM support for Javelin Anticheat appeared first on OC3D.
Highguard to shut down five weeks after launch. It's official: Wildlight Entertainment has confirmed that Highguard is getting shut down. The game will go offline on March 12th, a mere five weeks after the game's release. The game failed "to build a sustainable playerbase," and will not be supported long-term. In a statement on social media, […]
The post Highguard is getting shut down appeared first on OC3D.
FluxCopy is the AI finishing engine for writers, builders, and creators stuck on the final polish. While chat AIs generate from scratch, FluxCopy excels at the last 10%: paste your solid draft, choose Light Polish (free) or Final Pass (pro), and get clear, confident, publish-ready copy in seconds with no signup, no complex prompts, and no endless tweaks. Perfect for freelancers emailing clients, indie hackers crafting landing pages, consultants writing proposals, or anyone tired of almost-good text. Enjoy 5 free refinements per day (500 chars, watermarked) and stop rewriting: run the Final Pass and ship faster.
This morning, CAPCOM revealed in a press release that Resident Evil Requiem has already surpassed five million units sold globally in just six days since its February 27 launch. We already had an inkling of the game's excellent commercial performance based on how easily it smashed prior franchise concurrency records on Steam. User reviews on Valve's platform are also Overwhelmingly Positive (96% right now), and the game currently holds the best user score on Metacritic, tied with Clair Obscur: Expedition 33. Weirdly enough, though, CAPCOM does not boast of Resident Evil Requiem being the fastest-selling franchise entry, even though it […]
Read full article at https://wccftech.com/resident-evil-requiem-surpassed-5-million-units-sold/

Apple has supercharged its non-Pro tablet lineup by bringing the M4 to the iPad Air, but don't assume that the chipset will perform similarly to the silicon found in the more expensive iPad Pro. In fact, the differences are quite pronounced, and according to the latest single-core and multi-core benchmark comparison, the M4 iPad Air can lose more than 20 percent performance because Apple decided to equip it with a binned SoC. Unsurprisingly, the M4 iPad Air's 8-core CPU shows less of a performance difference with the 9-core M4 iPad Pro. Before Apple's newest slates go up for pre-order, Geekbench […]
Read full article at https://wccftech.com/m4-ipad-air-binned-chipset-causes-20-percent-multi-core-performance-loss-vs-m4-ipad-pro/

Working with Google, Taiwan uses 20 years of health data and Gemini to bring predictive diabetes care to millions in its population-wide health system. 