Minisforum's new flagship NAS comes with OpenClaw pre-installed: the Strix Halo-powered N5 Max can run a local LLM
Eight in ten Performance Max advertisers are receiving connected TV (CTV) impressions via YouTube, as reported by Smarter Ecommerce's Mike Ryan. Google has expanded the channel's reach over the past year, and the trajectory is only accelerating.

The timeline of how we got here:
Why we care. CTV is no longer a specialist buy. If you're running PMax, you're almost certainly already on the big screen, and Google has been steadily upgrading what that means for commerce. Google is automatically turning your product feed images into TV ads and allocating budget to CTV impressions, with no action required on your part.
Without actively checking your channel performance breakdown, you have no visibility into where your spend is going or whether auto-generated creative is actually fit for a 65-inch screen.
What advertisers should do right now:
The big picture: YouTube CEO Neal Mohan confirmed that TV has surpassed mobile as the primary device for YouTube viewing in the U.S. by watch time, and YouTube has been the #1 streaming platform in the U.S. for two consecutive years. PMax advertisers are already there; the question is whether they're managing it intentionally or just along for the ride.
Dig deeper. YouTube Viewing on TV Now Surpasses Mobile, Desktop in U.S.
Yahoo today introduced MyScout, a customizable homepage inside Yahoo Scout, its beta AI answer engine.
How MyScout works. Logged-in users can customize the homepage with tiles that pull information from Yahoo properties (e.g., Mail, News, Sports, Finance, Games). Examples include:
Users can add, remove, reorder, or create tiles based on topics or queries they want to follow.
New publisher features. Yahoo says Scout supports the open web by linking users directly to the original sources used in its AI answers. To support that goal, Yahoo News is also launching new publisher features designed to help publishers grow recurring audiences on its platform:
Availability: Yahoo Scout, including MyScout, is available in beta for U.S. users at Scout.com and through the Yahoo Search app on iOS and Android.
Yahoo's announcement. Yahoo Introduces MyScout, the First Personalized Homepage for AI Answers
QuickWise turns your documents, website, and FAQs into a data-grounded AI support chatbot that answers customer questions instantly. It uses a RAG pipeline with source citations, a corrections system to override mistakes, and an FAQ priority layer to enforce authoritative answers. QuickWise also includes built-in support ticketing with automatic bot-to-human handoff, analytics, an embeddable widget, a documentation portal, and API/webhooks. Deploy in minutes, manage multiple chatbots, and track deflection and satisfaction from one dashboard.
Most people have no idea what they're actually worth. Money is scattered across bank accounts, investments, property, and debt, with no single view tying it together. Finsory fixes that with one elegant dashboard and six dedicated modules: real-time net worth tracking, a monthly ledger, property and vehicle valuations, investment portfolio performance, and private lending management. Every asset and liability visible in one place. You wouldn't run a business without knowing your numbers, so why run your financial life without clarity? Finsory gives you the same insight that CEOs demand, applied to your personal wealth.
Fort tracks strength for people who care about longevity.
Google's branded queries filter in Search Console is now available to all eligible sites. The filter, announced Nov. 20, lets you separate branded and non-branded search traffic in the Performance report.
Why we care. Separating branded and non-branded queries has long required manual regex filters or keyword lists. This update gives you native segmentation in Search Console, making it easier to measure brand demand versus discovery traffic.
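For context, the old manual approach usually meant running a regex over an exported query report. A minimal sketch of that workflow (the brand terms and function name here are hypothetical placeholders, not anything Google ships):

```python
import re

# Hypothetical brand terms for an example site; swap in your own variants,
# common misspellings, and product names.
BRAND_PATTERN = re.compile(r"\b(acme|acme\s*corp|acmecorp)\b", re.IGNORECASE)

def is_branded(query: str) -> bool:
    """Classify a Search Console query as branded if it contains a brand term."""
    return bool(BRAND_PATTERN.search(query))

queries = ["acme pricing", "best crm software", "AcmeCorp login"]
branded = [q for q in queries if is_branded(q)]
non_branded = [q for q in queries if not is_branded(q)]
```

The native filter replaces this upkeep, but a script like this is still useful for auditing Google's classification against your own brand-term list.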
Google's announcement. Google confirmed the broader availability in a LinkedIn post today:
The details. The branded queries filter appears in the Search results Performance report. It lets you segment queries into two groups:
When applied, Search Console limits metrics (impressions, clicks, CTR, and average position) to the selected group. The filter works across all search types (Web, Image, Video, News) in the report.
Insights report. Google also added a new card to the Search Console Insights report that shows a click breakdown between branded and non-branded traffic.
Google's brand classification. Google uses an internal AI-assisted system to determine whether queries are branded. The system can recognize:
Some queries may be misclassified due to the contextual nature of brand detection, Google said. The filter is strictly a reporting feature and doesn't affect search rankings.
What to watch. Today's announcement indicates the filter has reached all eligible sites, though some properties may still not qualify due to query and impression volume requirements.
We joke every time we hear Google's John Mueller answer a question with "it depends." But actually, it's true.
There are few definitive answers or universally established facts in SEO. Do meta titles matter? Yes. Is internal linking a good practice? Yes. Is duplicate content bad for SEO? Yes.
But if I tried to make a list of SEO questions with a single, clear, absolute answer, it wouldn't be long.
That's the real challenge: we operate in an industry where things almost always depend on context, intent, competition, your website's situation, and the platform itself.
Yet over and over, we see questions framed as if there must be one right answer. SEO tips are often shared as universal truths, one-size-fits-all for websites, industries, and business models.
My purpose here is simple: to shift that mindset. Especially if you share SEO advice publicly, let's move away from "this is the only way" and toward "this is one way, depending on your situation."
The idea for this article came to me when I saw Mueller respond to a Reddit thread about the importance of schema markup. He replied, "This question will stick with us for the next year and longer, and the short answer is yes, no, and it depends…"
And heβs absolutely right.
Schema isn't a special case of "it depends." It's just a familiar one. The same logic applies across almost every debate in our industry, including arguably the biggest one right now.
This has become one of the most debated topics going into 2025 and 2026. Is SEO the same as generative engine optimization (GEO)?
Well, it depends. If we're talking about core tactics (content quality, structured information, entity relationships, internal linking, bot accessibility, and content discoverability), then yes, there is significant overlap.
But if we're talking about platforms and how they operate, then no. SEO traditionally optimizes for search engines like Google. GEO aims to influence visibility within generative systems like those developed by OpenAI and others.
The mechanics differ:
That doesn't mean one replaces the other. It means the context changes.
So, do you still think GEO is the same as SEO? (Yes, no, and it depends are all correct answers.)
This was another Mueller moment on Reddit, where he responded with: "I think I'm trying to say 'it depends'?"
Is domain age a ranking factor? Not directly.
Can a newer website outrank an older one? Generally yes. Specifically, it depends on a lot of factors:
There are too many moving parts to give a universal answer, and thatβs exactly the point.
While it's tempting to say yes, the standard answer is no. 404s don't automatically hurt your website's performance in search.
Fixing 404s is on every technical SEO checklist. It's a good practice and definitely reduces your website's technical debt. But 404s don't inherently hurt your search performance, because Google understands that pages are retired naturally.
Products go out of stock. Articles get removed. Content evolves. A 404 status code, by itself, is not a penalty trigger.
Unless your website creates a large number of 404s in a short period, which can happen during website migrations, for example. If a significant percentage of previously indexed URLs start returning 404s, that can absolutely impact search visibility for the whole website.
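The "significant percentage" test is easy to make concrete. A minimal sketch (the function and threshold framing are my own, not an official Google heuristic):

```python
def share_of_404s(status_by_url: dict[str, int]) -> float:
    """Percentage of crawled URLs that return a 404 status."""
    if not status_by_url:
        return 0.0
    broken = sum(1 for code in status_by_url.values() if code == 404)
    return 100 * broken / len(status_by_url)

# 10 dead pages on a 50,000-page site is 0.02% of the site: background noise.
# The same 10 dead pages on a 40-page site is 25%: worth investigating.
```

Feed it the status codes from any crawl export, and the same raw count of 404s reads very differently depending on site size.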
But imagine this: a website with tens of thousands of pages, or even millions of pages, and they have 10 404s. These are definitely not a high-priority fix. Right?
Yes, I would ignore them, especially if your dev team has higher-priority items in their queue. They're just 10 links. They don't matter…
Unless they have valuable backlinks linking to them.
Or unless those URLs are heavily linked internally, meaning users and crawlers repeatedly encounter them.
Or you're running a news website where content is timely, and these 404 pages are ranking in search for time-sensitive keywords instead of your working (status 200) content pages.
See what happened? The answer changed based on context. For every rule, there seems to be an exception.
To be a great SEO, you cannot simply operate off a checklist:
You have to ask:
And once again, it depends.
The real skill in SEO isn't memorizing best practices or having the best, most comprehensive checklist.
It's knowing when different things apply and understanding:
Saying "it depends" means you understand the question well enough to know it has no single answer.
In an industry shaped by evolving algorithms, multiple platforms, and constantly shifting user behavior, knowing this is foundational.
So maybe instead of rolling our eyes every time we hear "it depends," we should recognize it for what it is: the most honest answer in SEO.
The largest annual survey of PPC professionals finds the industry under growing pressure: more opaque platforms, weaker measurement, and AI tools that help but haven't transformed the day-to-day.
Why we care. More than half of practitioners (53%) say PPC is harder than it was two years ago, up from 49%. The dominant reason isn't competition; it's that platforms are making more decisions advertisers can't see or override, and that gap is only widening.
With 89% of digital spend flowing to just three companies, advertisers who don't build measurement infrastructure independent of platform reporting are increasingly flying blind.

By the numbers:
What they're saying. Exact match keywords remain the most trusted feature (75% use them often or always). AI Max for Search has the lowest adoption of any tracked feature: 34% have never used it (but then it's the youngest of Google's major updates). Auto-apply recommendations are firmly distrusted across the board.
Between the lines. Agency survival is the subtext of the whole report. Finding talent and growing revenue are both flagged as "very or often challenging" by 62% of agency respondents. And the threat isn't defection to rival agencies; it's clients cutting agencies out entirely by using AI in-house.
The big picture. Practitioners seem to have found a pragmatic relationship with AI: use it for copy and research, distrust it for autonomous decisions. The harder problem is one AI can't solve: platforms are taking more control while giving advertisers less visibility. That gap is widening, and there's no clear fix in sight.
Dig deeper. The State of PPC Global Report 2026

Google is making Merchant Center for Agencies generally available in the U.S. and Canada today, giving agency teams a single login to manage, monitor, and optimize merchant clients at scale.
Whatβs included:


Why we care. Managing multiple merchant accounts across Google's ecosystem has historically meant jumping between logins and dashboards. Having it all surfaced in one place means problems get caught faster, before they quietly drain client revenue. And with merchandising opportunity tools built in, it's not just a monitoring dashboard; it's designed to actively surface ways to improve performance across your entire client portfolio.
Early results. Digital marketing agency Socium Media piloted the product ahead of the holiday season, using it to monitor client promotions, inventory, and feed diagnostics from one place, and reported 50% faster resolution on monitoring tasks as a result.
The big picture for agencies. Time spent on account monitoring and diagnostics is time not spent on strategy. Tools that compress that operational overhead, especially during high-stakes periods like Q4, directly translate into capacity for higher-value client work. Agencies managing large retail portfolios should prioritize getting set up before the next peak season.
What's next. Full details are available in Google's Help Center, with the rollout live in the U.S. and Canada starting today.
Every few weeks, a new study drops declaring that Reddit (or YouTube, or Wikipedia) is the most important source for AI citations. Marketers share it. Clients ask about it. Someone starts drafting a Reddit strategy.
It keeps happening because these analyses flatten the nuance of prompt intent, model differences, and vertical context into a single headline number, and brands rush to build strategies and teams around benchmarks that have nothing to do with their actual category or customer journey.
The shiny object problem in AI search is real, and it's getting more expensive.
Tinuiti's Q1 2026 AI Citation Trends Report (disclosure: I'm the senior director of AI SEO innovation at Tinuiti) tracked high commercial-intent prompts across nine verticals and seven major AI platforms (ChatGPT, Perplexity, Google AI Mode, Google AI Overviews, Google Gemini, Microsoft Copilot, and Meta AI) over four months ending in January 2026.
The early finding is also the most important one: there's no universal top source. There are only patterns shaped by intent, platform, and category.
The Reddit headline is real. Across all categories and platforms we tracked, Reddit's citation share grew by at least 73% from October 2025 to January 2026 and more than doubled in some industries. For Perplexity specifically, 24% of all citations in January came from Reddit alone.
But a deeper look at ChatGPT social citations adds important context for brands: 99% of Reddit citations point to unique discussion threads, not subreddit pages, brand profiles, or corporate content, according to analysis from Profound. ChatGPT isn't citing just anything from Reddit, so simply having a Reddit presence isn't going to cut it.
The citation opportunity lives in whether the authentic conversations happening in your category contain useful, self-contained answers, and whether your brand has any presence in those conversations at all.
This means brands need to focus on driving authentic conversations, not simulating them. Fostering real community in spaces like Reddit (finding your brand ambassadors, participating genuinely, making it easy for satisfied customers to talk) is community building and reputation management. That's the work that earns citations.

The vertical variance is dramatic.
A brand in OTC health looking at the aggregate Reddit growth number and assuming it applies to them is starting from the wrong baseline.
The platform layer makes it more complex still. Reddit's share on ChatGPT was above 5% in January. On Google Gemini, it was 0.1%.
If your audience is primarily finding your category through Gemini, the Reddit conversation is almost irrelevant to your AI visibility right now.
Dig deeper: A smarter Reddit strategy for organic and AI search visibility
Here's where the "it depends" argument gets uncomfortable even for brands that think they've done the work of platform segmentation.
Reddit accounted for 44% of all social media citations in Google AI Overviews in January. In Google Gemini, that number was 5%. That's nearly a 9x difference in Reddit's influence between two AI products built, maintained, and branded by the same company.
The divergence extends across every social platform we tracked.

A brand building its AI visibility strategy around Gemini performance data could draw nearly the opposite conclusions about Reddit from a brand tracking AI Overviews. They'd also reach different conclusions about Medium, YouTube, and LinkedIn. Same parent company. Same logo. Fundamentally different citation ecosystems.
This is also why the volume of unique domains cited diverged so sharply across Google's surfaces over the same period.
By January, Google AI Mode was citing 143% more unique domains than AI Overviews, a gap that barely existed two months earlier. The surfaces are evolving at different speeds, in different directions, with different source preferences. Treating "Google" as a single AI channel is like treating "social media" as a single content strategy.
Another data point adds context: roughly 17% of AI Overview citations overlap with Page 1 organic rankings, according to BrightEdge, and that share varies significantly by industry.
Gemini, AI Mode, and AI Overviews cite different sources, and AI Overviews operate on logic that's largely separate from the traditional Google rankings you've spent years optimizing. The surfaces diverge from each other and from organic at the same time.
Dig deeper: AI Overview citations: Why they don't drive clicks and what to do
Consider what happened between Amazon and Walmart on ChatGPT over the past four months.
In October 2025, Amazon led all major multi-category retailers in ChatGPT citations. By November, its share had dropped sharply. The major cause: Amazon has been aggressively blocking AI crawlers, with nearly 50 specific user agents restricted in its robots.txt file by late January, including all three of OpenAI's crawlers.
Walmart, which hasn't taken the same approach, filled that gap and has seen its ChatGPT citation share rise steadily ever since.

Amazon's strategy is deliberate, but complicated. By blocking the Google-Extended crawler that feeds Gemini while allowing Googlebot (which powers AI Mode and AI Overviews), Amazon is being very intentional about which systems get direct access to its data, with the trade-off that it's excluded from the marketplace set in ChatGPT's embedded ecommerce.
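In robots.txt terms, that selective posture looks roughly like the sketch below. This is illustrative, not a copy of Amazon's actual file; the user-agent tokens are the publicly documented crawler names from OpenAI and Google.

```
# Block OpenAI's three crawlers entirely
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

# Block the crawler token that feeds Gemini training/grounding
User-agent: Google-Extended
Disallow: /

# Googlebot (which also powers AI Mode and AI Overviews) stays allowed
User-agent: Googlebot
Allow: /
```

The key point is that Google-Extended is a separate token from Googlebot, so a site can opt out of Gemini while keeping its Search, AI Mode, and AI Overviews presence intact.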
Amazon would clearly rather drive users directly to Amazon, where it controls the cross-sell, the upsell, and the recommendation via its own shopping agent, Rufus, than have its long history of product data and user-generated content fuel competitor platforms. Amazon sued Perplexity in late 2025 over exactly this kind of access dispute.
The result is that Amazon's citation share looks completely different depending on which platform you're analyzing. In Google AI Overviews, it still holds a commanding lead over every other ecommerce player. On ChatGPT, Walmart has taken over.

Dig deeper: How AI-driven shopping discovery changes product page optimization
Citation studies (including ours) should be used as directionals. When you see Reddit growing, check whether it's actually part of your customers' research journey in your category. Listen to the conversations, identify your brand ambassadors, and assess the level of effort versus the impact.
When a platform dominates headlines, weigh whether it matters for your vertical before building a brand-new strategy and team around it.
Value, effort, and impact look different for a beauty brand than a manufacturing company, even if theyβre reading the same AI search news cycle.
Test with that context. Use aggregate benchmarks to generate hypotheses, then validate them with your own data. The brands building durable AI visibility are doing the less glamorous work of understanding their own category well enough to know where to show up and why.
For the full methodology and findings, see Tinuiti's Q1 2026 AI Citation Trends Report (registration required), developed with Profound.

Optimizing your client's TripAdvisor listing is an important part of the local SEO ecosystem, even though it's often treated as a secondary channel. Done well, it can increase visibility, drive more qualified website traffic, and strengthen brand positioning and online reputation.
TripAdvisor frequently appears in search results for tourism and hospitality businesses and often serves as a key third-party discovery touchpoint. Treating it as a strategic SEO asset, not just a review site, can create meaningful advantages in visibility, trust, and conversions.
TripAdvisor is a travel booking and decision-making platform where users arrive with clear conversion intent, typically in the mid-to-lower funnel stages. It functions as both a comparison tool and a marketplace for hotel reservations, excursions and attractions, restaurants, and cruises.
Reviews on TripAdvisor matter, but they don't operate in isolation. Their impact depends on the overall quality of the business profile, including the clarity of its unique value proposition and the strength of its brand image.
TripAdvisor offers less control than an owned website and doesn't align with classic technical SEO frameworks the way platforms like Amazon or LinkedIn do. Still, Google continues to surface TripAdvisor prominently across tourism and local business searches, reinforcing its role as a trusted external source.
TripAdvisor is also known for operating one of the most powerful programmatic SEO architectures in the industry, with millions of URLs indexed by city, category, search intent, and experience type. The platform is estimated to receive around 490 million unique visits per month.
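TripAdvisor's exact URL scheme isn't public, but the facet cross-product idea behind programmatic SEO can be sketched in a few lines (the slugs and facets here are illustrative, not TripAdvisor's real taxonomy):

```python
from itertools import product

cities = ["new-york", "paris", "tokyo"]            # illustrative city slugs
categories = ["restaurants", "hotels", "attractions"]
intents = ["best", "cheap", "family-friendly"]     # search-intent modifiers

# Each facet combination becomes its own indexable landing page,
# so page counts grow multiplicatively with facet values.
urls = [
    f"/{intent}-{category}-{city}"
    for city, category, intent in product(cities, categories, intents)
]

print(len(urls))  # 3 * 3 * 3 = 27 pages from just three values per facet
```

Scale the facet lists to thousands of cities and dozens of categories and intents, and millions of indexed URLs follow naturally, which is why your listing's categories and tags feed directly into that architecture.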
Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026
Optimizing your business profile on TripAdvisor doesn't just mean hoping users will leave positive reviews of your location or services. Follow these three strategies to strengthen your listing.
Strategically responding to reviews can strengthen semantic SEO by enriching the contextual signals around your business and increasing the likelihood of being referenced in AI-powered search experiences and LLMs.
For example, if a guest mentions enjoying the hotel pool during a family trip but provides little detail, your response can thank them for their stay while highlighting the range of water-based activities available for children and older guests. This builds richer context around the hotelβs leisure facilities beyond the original review.
It's also important to guide guests when possible. For example, when using QR cards to encourage reviews, briefly explain the value of writing a detailed, descriptive review rather than a one-line comment.
Images on TripAdvisor act as immediate scroll stoppers. They must be visually strong, eye-catching, and vibrant, clearly conveying a positive emotion and a high-quality experience within seconds.
If you're unsure which images to place in cover or top positions, review performance data from platforms like Instagram or TikTok to identify visuals that generated the highest engagement.
Ideally, refresh images every 4 to 6 weeks. Each caption should include a clear description, written in natural, fluent language, that provides context for the dish or service, the target audience, and the type of experience offered.
For example: "Grilled salmon served on our sea-view terrace, a popular choice for solo travelers during the summer."
Dig deeper: The local SEO gatekeeper: How Google defines your entity
Proper tagging and categorization is one of the most poorly handled aspects of TripAdvisor. Incorrect categorization or missing relevant tags directly affects internal visibility, influencing rankings, filters, and curated lists.
As part of TripAdvisor's programmatic SEO architecture, these signals also influence how pages are structured and surfaced in Google when users search for local businesses.
For example, to appear in TripAdvisor rankings like [The 20 Best Restaurants for a Romantic Valentine's Day Dinner in New York], your categories and tags must cover the full range of real experiences your business offers.
It's surprising how many businesses still have duplicate listings on TripAdvisor in 2026. This usually happens because creating a listing is relatively easy and doesn't require official business verification, something the platform could improve.
However, claiming and merging duplicate listings does require official documentation to verify ownership, such as a business registration certificate or a recent utility bill linked to the business address.
Make sure your business description, menu item names or services, opening hours (especially public holiday hours), and any other sensitive business information match exactly what appears on your Google Business Profile.
From a brand SERP perspective, particularly in tourism and hospitality, TripAdvisor is often the main third-party channel where users discover your brand.
In many cases, a TripAdvisor listing appears above your own website. An incomplete, outdated, or poorly optimized profile can weaken trust before users reach your site. Optimizing TripAdvisor means owning a critical part of your brandβs search footprint.
Dig deeper: Want to win at local SEO? Focus on reviews and customer sentiment
One advantage of TripAdvisor business profile SEO, which receives relatively little attention, is that when you execute it properly and quickly, it can become a clear competitive advantage and a strong strategic position against competitors.
Just keep the following in mind for TripAdvisor rankings:
TripAdvisor SEO depends on consistency, attention to detail, and understanding how reviews, content, and engagement signals work together to influence rankings and user decisions.
When you do it well, your customers become your strongest marketing asset.
Nvidia confirms DLSS 4.5 upgrade for War Thunder
Nvidia has confirmed that War Thunder will be getting upgraded with support for DLSS 4.5, improving the fidelity of Nvidia's AI upscaling solution. With this change, Nvidia users should be able to enjoy clearer visuals in War Thunder, especially when upscaling from lower resolutions. Nvidia has boasted […]
The post War Thunder is getting upgraded with Nvidia DLSS 4.5 appeared first on OC3D.
Dutch Web Services delivers production-ready backend infrastructure with AI-powered generation, enterprise authentication, and fully managed deployment. Define your data model with a visual schema builder, generate APIs automatically, and ship to production with monitoring and environment controls. Use real-time CRUD, advanced entity management, OpenAPI specs, and third-party integrations. Secure apps with multi-tenant isolation, machine-to-machine auth, SSO and SAML, and role-based access. Scale with team collaboration, custom domains, SLA-backed support, and optional on-premise deployment.

LinkedIn made some good moves last year that I've seen pay off for our suite of B2B clients. Now that we're into 2026, with yearly marketing goals in focus, I've got some recommendations based on our 2025 learnings for you to test and leverage in the coming months. Those include:
Let's put a magnifying glass on each and explain the benefits you stand to gain.
Even though Meta and TikTok are more natural fits for video, LinkedIn isn't immune to the video movement, particularly short-form video (7 to 15 seconds). While having video content is an important line item on your marketing strategy plan, the right content is even more important.
There are plenty of ways to leverage video, including new-ish placements like First Impression Ads. What I recommend is that you try video ads in the feed first to compare performance and engagement with other types of in-feed ads you've been running.
The usual caveats apply here:
Dig deeper: LinkedIn study reveals how B2B video ads can gain +129% engagement lift
One of the toughest parts of B2B advertising is engaging potential customers on behalf of a business or corporate entity. Thought Leader Ads (where companies can essentially boost content from employee accounts) have actually been around for a couple of years, but we got serious about testing them in 2025 and earned much higher engagement than with typical ads from business profiles.
TLAs open up some creativity, too. Humor-focused posts, for one, are a lot more natural a fit from a personal account.
As with other boosted content, be judicious about where you invest. If a post already has traction organically (and that's become harder over the last year as LinkedIn has throttled back reach) and makes a good business case for working with your company, that's a good candidate for a TLA.
A couple of caveats here, too:
Dig deeper: LinkedIn Ads retargeting: How to reach prospects at every funnel stage
In the latter half of 2025, we ran a significant number of tests with personalized LinkedIn ads across different geos and using different campaign types.
In our global campaigns, we saw an average of >20% improvement in cost per lead, with higher CTR and lower CPC. U.S. campaigns were even more successful. CPLs dropped 33%.
Per our LinkedIn rep, European users in particular value privacy more than U.S. users, so it makes sense that personalization was more effective stateside. Either way, and even in U.S. campaigns, personalized ads began to show signs of fatigue after about a month.
We responded by combining personalized and non-personalized ads into one campaign to lower the frequency of the personalized ads, and also to allow for side-by-side comparisons in the same environment.
Dig deeper: LinkedIn's new playbook taps creators as the future of B2B marketing
If you've run Conversions API (CAPI) and enhanced conversions in Meta and Google, you're familiar with the idea of Qualified Lead Optimization. Essentially, this is LinkedIn's way of letting you integrate your first-party data into the platform's back end to help its algorithm find higher-quality users.
Now, this isn't quite as effective as its Meta and Google counterparts yet, but we've seen an increase in the proportion of qualified leads.
To test it:
This one is tactical, but it's saved me a ton of time in our accounts, so it's worth making sure you're aware of it.
In March 2025, LinkedIn launched a few updates to Campaign Manager, including a new feature that makes it easier to duplicate ads across campaigns and accounts. This has greatly improved our time to launch new campaigns; there's no downside to getting your hands around it.
We haven't yet aggressively tested LinkedIn's new CTV capability, but we're keeping an eye on industry perspectives. This can be a great medium to gauge the messaging and positioning that works for your brand with niche targeting options before rolling out big-screen campaigns.
In the scheme of things, LinkedIn provided some quality-enough updates last year for us to shift more client budget there. As always, you need to carry the right expectations for the platform and make sure you have a strong methodology for measuring its value in your pipeline.
With those in place, and with a rock-solid understanding of your ICP that lets you fully leverage LinkedIn's targeting levers, I'm betting LinkedIn can be a pleasantly surprising source of growth in the coming months.
Pearl Abyss released detailed hardware requirements for Crimson Desert
Crimson Desert is due to be released on PC and consoles next week, and Pearl Abyss have released detailed PC and console specifications for the game, including resolution, settings, and framerate targets. On PC, Crimson Desert appears to be well optimised, […]
The post Crimson Desert's PC/Console specifications put other developers to shame appeared first on OC3D.


Nvidia's now ready for Crimson Desert and Death Stranding 2
Nvidia has officially released its new GeForce 595.79 WHQL driver for Death Stranding 2: On The Beach and Crimson Desert. With this driver come crash fixes for Crimson Desert, graphical fixes for Resident Evil Requiem, and crash fixes for Star Citizen. This new driver contains […]
The post Nvidia launches its GeForce 595.79 driver for Crimson Desert and Death Stranding 2 appeared first on OC3D.
Cognivox lets developers chat with advanced AI to test, debug, and automate API workflows. Design multi-step pipelines in a visual DAG builder, chain and branch requests, and run intelligent fuzzing for SQL injection, XSS, and boundary cases. It parses OpenAPI, Postman, and Swagger to auto-generate coverage, self-heals when schemas drift, and delivers scheduled email reports. Use OAuth login, organize tests in collections, and monitor runs in real time with pass/fail summaries and PDF reports.
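Cognivox's internals aren't public, but the kind of fuzzing the blurb describes can be sketched generically. The payload lists and the `fuzz_cases` helper below are our own illustration of SQL-injection, XSS, and boundary-case generation, not Cognivox's actual API:

```python
# Generic sketch of the adversarial inputs an API-testing tool might
# generate. Payloads and field names are illustrative only.
SQLI = ["' OR '1'='1", "1; DROP TABLE users--"]
XSS = ["<script>alert(1)</script>", "\"><img src=x onerror=alert(1)>"]
BOUNDARY = ["", "0", "-1", "9" * 40, "\u0000", "A" * 10_000]

def fuzz_cases(field: str) -> list[dict]:
    """Expand one input field into a list of adversarial request bodies."""
    return [{field: payload} for payload in SQLI + XSS + BOUNDARY]

cases = fuzz_cases("username")
print(len(cases))  # one request body per payload: 10
```

A real tool would send each body to the endpoint under test and flag responses that leak errors, reflect markup unescaped, or time out.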
Graphorb is a privacy-first AI assistant for work and learning. It offers end-to-end encrypted chat, no ads, and secure infrastructure, while providing fast, high-quality responses for writing, document analysis, coding help, and study support. You can choose from free, personal, pro, and corporate plans with clear limits and global access, and keep chat history as you scale from individual use to team deployments.

WordPress security release 6.9.2 fixes 10 vulnerabilities but also causes some sites to crash, requiring bugfix release 6.9.3.
The post WordPress Releases A Security Update Followed By A Bugfix appeared first on Search Engine Journal.
AI-powered credit repair: upload your report and get dispute letters citing federal law for a one-time fee of $49, no subscription. ScoreAgent analyzes your credit report from all three bureaus, finds errors and inconsistencies, and generates personalized FCRA dispute letters in minutes. What credit repair companies charge $1,800 for, ScoreAgent does for $49.

ASUS has released a new BETA BIOS for its X870 motherboards, which adds support for a future Ryzen CPU. Could it be Ryzen AI 400 or the 9950X3D2? ASUS X870 Motherboards Receive Support For A Future AMD Ryzen CPU In Latest BIOS ASUS has rolled out a new BETA BIOS for its X870E and X870 motherboards, which adds support for a future CPU. This new BETA BIOS is based on the AGESA 1300a firmware. ASUS states that this BIOS may be released as a stable version on the main webpage, with X870E/B850 boards being the first to roll out this week. […]
Read full article at https://wccftech.com/asus-x870-bios-adds-future-ryzen-cpu-support-ryzen-ai-400-or-9950x3d2/

The fast and private text expander for macOS and iOS
Native Swift apps + a real cloud database. One prompt away.
Cursor for Animated Videos
Learn math and science with interactive visual explanations
Turn feature ideas into stakeholder-ready code prototypes
Give agents everything they need to ship fullstack apps
The open standard for Generative UI
Serve Any AI Model, Faster & Cheaper
1:1 text-acoustic alignment for 5x faster speech generation
Google's first natively multimodal embedding model
Build a team of AI specialists that deliver quality work
Comprehensive memory management for Claude Code
Ask AI about anything on your screen with a single hotkey
Cursor for video editing
Your AI reputation coach for LinkedIn, X, Reddit & more
Google Drive for Your Speech w/ 99% AI transcription
The complete web data toolkit for AI agents
Be in AI answers before your competitors. We do it for you.
A visual AI knowledge base that organizes what you save
Summon your army of AI agents
Duto is a canvas that lets you describe what you want to create, such as product ads, e-commerce visuals, realistic talking head clips, Instagram and TikTok videos. It either builds the workflow for you or lets you wire it together yourself. A text prompt goes into an image model, that image into a video model, then video gets lip-sync or audio added. You can run it again on any input without rebuilding anything. Models like Veo 3, Kling 3.0, and Seedance are all in the same canvas, and you can push a batch of items through in parallel.


AutoText is a Chrome extension that adds Cursor-like functionality directly to Gmail. It's time we bring the intelligence of coding IDEs to email composition. Building a system that understands thread context, operates safely inside Gmail's editor, moves through a secure auth stack, and delivers low-latency suggestions was very challenging. The UI is fast, security is taken seriously, and the inference stack is heavily engineered. It's free! Please share your feedback and I'll improve it ASAP.
Apple only stated that the MacBook Neo ships with either a 256GB or 512GB SSD; if the company had mentioned how fast its NAND flash storage is, potential customers might have been discouraged from picking it up. According to the latest comparison, the most affordable portable Mac in the company's lineup doesn't even reach PCIe NVMe Gen 3 bandwidth. In the latest Blackmagic Disk Speed Test, Apple's M5 MacBook Air, the second most affordable portable Mac, has up to 316 percent faster SSD speeds than the MacBook Neo. If there was one shocking compromise that we didn't know about until […]
Read full article at https://wccftech.com/macbook-neo-ssd-speeds/

The company announced location-based increases ranging from 2% to 5% and said it would no longer be covering the cost of digital service taxes and other fees.
RemodelerIQ helps homeowners negotiate remodeling bids with confidence. Upload a contractor's estimate and its AI compares line items to local market rates, audits labor costs, and flags missing items, red flags, and non-standard terms. You get a clear report with savings opportunities and scripted questions for negotiations. RemodelerIQ also scores bid risk, checks state law compliance, and benchmarks prices using data from sources like Zonda, BLS, Houzz, and the Federal Reserve. Start free or unlock unlimited with Premium.
It's unusual to see a hardware maker recommend older, slower hardware, but ASRock justifies it by noting that the Arc A380 now supports the latest XeSS 3. ASRock Says DeskMeet Series + Intel Arc A380 Will Deliver "Strong Performance" While Providing an "Outstanding Value" The sky-high prices of memory and SSDs have made it extremely difficult for gamers to build a budget gaming PC. Previously, it was easy to build a powerful sub-$1,000 PC for 1080p gaming. It's still possible, assuming you can find both memory and a storage drive at good prices, or by […]
Read full article at https://wccftech.com/asrock-recommends-intel-arc-a380-for-deskmeet-series-for-gamers-on-a-budget/

The new self-serve feature allows creators to select from a range of relevant products and earn sales commissions from their Reels content.
A new study in partnership with Omnicom and Ipsos found that the app's users are more likely to watch a show after discovering it on social media platforms.

Two new studies identified the professional social network as a key resource across multiple artificial intelligence chatbots.

Government officials, journalists and political candidates will join a pilot group of users as the company tests its image misuse protocols.

Creators Matt Schlicht and Ben Parr will join Meta's Superintelligence Labs as part of the deal, which aligns with the company's previous artificial intelligence investments.

The company said women including Brittany Wilson Isenhour, Gym Tan and Madison Humphrey were driving discussions in the app.
Javelina DNS is enterprise-grade DNS infrastructure built for reliability, security, and scale. Powered by proven internet-scale technology used by major platforms, Javelina enables real-time traffic rerouting, high-availability DNS, and secure global performance for modern applications. With bank-level security, high uptime, and flexible capacity, teams can manage domains and traffic with confidence as they grow.
Intel's next-gen CPU series will support much higher memory frequencies out of the box compared to the current lineup. German PC Maker ECS Showcases Liva P300 Embedded PC With Intel Core Ultra 400S Processor; Specs Reveal Support for DDR5-8000 The upcoming Intel CPU series is set to deliver much better memory support, as spotted in one of the mockups. At the Embedded World 2026 event in Germany, hardware manufacturer ECS showcased its new embedded PC mockup called Liva P300, based on Intel Nova Lake-S. It's a mockup, so obviously, there is likely no Intel Nova Lake […]
Read full article at https://wccftech.com/intel-nova-lake-s-based-embedded-pc-mockups-reveal-cpus-support-8000-mt-s-ddr5/

OnboardingHub centralizes customer onboarding into a single, guided experience with clear milestones and completion tracking. You can build step-by-step guides with a visual builder, personalize journeys by role or segment, and provide customers with a progress hub they can access with one link.
Track drop-offs with step-level analytics, embed tools like Calendly, Typeform, and YouTube, and manage contacts and organizations in one place. You can run multiple workspaces, enforce role-based permissions, and scale repeatable onboarding without adding headcount.
The launch of Apple's 'budget-oriented' MacBook Neo was a shocker for manufacturers like ASUS, but the firm now intends to compete in the same segment. ASUS's co-CEO Claims Apple's MacBook Neo Is Limited To Content Consumption Due To Its Specifications Apple's MacBook Neo is focused on bringing the intriguing elements seen in the traditional Mac and slapping a mobile SoC into a price segment that has made the Windows laptop industry rethink its strategy. Coming in at $599, the MacBook Neo is seen as a 'sigh of relief' in a market plagued by memory shortages and pricey devices, and when […]
Read full article at https://wccftech.com/asus-dismisses-apple-macbook-neo-as-just-a-tablet/

The class-action lawsuit, branded as "PlayStation You Owe Us", filed by consumer rights champion Alex Neill against Sony's PlayStation has finally kicked off in earnest today, with the plaintiff's legal counsel, Mr Palmer, delivering the opening statement (the proceedings were livestreamed on the Competition Appeal Tribunal's official website). The legal action was filed in 2022 to challenge Sony's monopoly over the PlayStation Store digital platform, which, according to the plaintiffs, led to higher game prices for consumers. The damage figure sought by the plaintiff has changed, though: it initially stood at $5 billion in 2022 and rose to £6.3 billion in November […]
Read full article at https://wccftech.com/sony-playstation-store-class-action-trial-uk/

VoidMob brings together three services that privacy-conscious users usually manage separately: real 4G/5G mobile proxies, SMS verification from actual SIM cards, and instant eSIM data in over 200 countries. Everything runs on real carrier infrastructure: no datacenter IPs, no VoIP numbers that get flagged.
With one dashboard, a single balance, crypto payments, and no KYC, you get trusted mobile IPs for browsing, real phone numbers that pass platform checks with a near-perfect success rate, and eSIM activation in seconds. Developers can also access everything programmatically through an open-source MCP server for AI agents or a standard API.
As happens pretty much every month, thanks to the same reliable source, Billbil-kun at Dealabs, the monthly PS Plus Extra and Premium Game Catalog titles have leaked. Or at least some of them have, and a few more will likely be revealed tomorrow when the official announcement goes live. According to the report, while players shouldn't expect Dragon's Dogma II, we will get Warhammer 40K: Space Marine 2, Metal Eden, Persona 5 Royal, and more. The note about Dragon's Dogma II addresses a rumor that gained some traction regarding the March 2026 batch of […]
Read full article at https://wccftech.com/ps-plus-extra-march-2026-space-marine-2-persona-5-royal-metal-eden/

NVIDIA has struggled to balance its consumer and enterprise opportunities, and Microsoft's CEO has reminded Jensen of his 'gaming beginnings'. The Buildup Towards Modern-Day AI Infrastructure Was Initiated With NVIDIA's Focus on Gaming Ever since the launch of ChatGPT, demand for NVIDIA's AI chips has grown to unprecedented levels, to the point that the firm's market capitalization has soared to the trillions in just a few years. Since then, NVIDIA has been predominantly focused on catering to enterprise customers, and that inclination comes at a cost, which we have already begun to see with consumer GPUs. There has […]
Read full article at https://wccftech.com/nvidia-wouldnt-exist-without-gaming-says-microsoft-ceo/

Sucker Punch Productions has released Ghost of Yotei Legends as free DLC for all owners of Ghost of Yotei today on PS5. It's a welcome return of the surprise multiplayer mode that players loved digging into in Ghost of Tsushima, but its arrival on PlayStation today, days after we learned more about Sony's evolving strategy with its PC ports, makes the strategy pivot sting even more. When Ghost of Tsushima arrived on PC, it set new records for concurrent player counts on Steam amongst PlayStation's current suite of PS5 to PC releases and was the best-selling game in the US […]
Read full article at https://wccftech.com/ghost-of-yotei-legends-out-now-ps5/

Hope is the engine of optimism and the indomitable force behind a will to change for the better. But when it comes to Google's Pixel lineup, that hope is in short supply and grows scarcer with each passing year, especially judging from the latest CAD renders of the Pixel 11 Pro Fold, which depict bezels the size of football fields, a bloated, chunky design, and nary a concerted effort to add additional camera sensors or tweak the device's aspect ratio. Google's Pixel 11 Pro Fold: A masterclass in ugly aesthetics Steve H. McFly has now revealed the CAD renders […]
Read full article at https://wccftech.com/pixel-11-pro-fold-when-will-google-ever-get-over-its-fetish-for-ugliness/

Control Resonant will launch on PC with path tracing, DLSS 4.5, and RTX Mega Geometry Nvidia has confirmed that Control Resonant will be launching with all of the latest RTX goodies. On PC, the game will ship with DLSS 4.5, Path Traced effects, and Nvidia RTX Mega Geometry. Control was a graphical showcase when it […]
The post Control Resonant will be Nvidiaβs next big RTX showcase appeared first on OC3D.
Nvidia confirms DLSS 4.5 Dynamic and 6x Frame Generation's release date Nvidia has announced that DLSS 4.5 Dynamic and 6x Frame Generation will become available to RTX 50 series GPU owners on March 31st. This support will arrive through DLSS Overrides as part of an opt-in Nvidia App beta update. DLSS 6x Frame Generation increases […]
The post DLSS 4.5 Dynamic and 6x Frame Generation to launch this month – Nvidia confirms appeared first on OC3D.


Apple's $599 MacBook Neo arrives with a premium aluminum build, bright display, solid A18 Pro performance, and battery life that punches well above its price. Reviewers largely agree it resets expectations for what a budget laptop can be, despite the evident trade-offs that remind you how Apple kept costs down. Don't miss our editorial on the Neo for insight into the industry implications beyond the device itself.
Draw On Screen lets you annotate your screen live to tell clearer stories in demos, calls, and lessons. You can sketch lines and boxes, type text, use a laser or spotlight, and toggle your webcam with a minimal interface that stays out of the way. Record exactly what you see without complex editing or compositing to produce content faster and keep viewers focused.
A survey of B2B decision-makers found peer recommendations are trusted nearly twice as much as AI chatbots, and white papers rank last for perceived value.
The post B2B Buyers Trust Peers Over AI Chatbots, Report Finds appeared first on Search Engine Journal.
A federal judge granted Amazon a preliminary injunction barring Perplexity's Comet AI agent from accessing Amazon accounts and ordering data destroyed.
The post Amazon Wins Preliminary Injunction Against Perplexity's Comet appeared first on Search Engine Journal.
Monster Hunter Stories 3 proves that it's not just about the thrill of the hunt, but setting up the ecology to support it for generations to come.
Read full article at https://wccftech.com/review/monster-hunter-stories-3-twisted-reflection/

Publisher and developer ProbablyMonsters has revealed a new gameplay trailer for the upcoming third-person stealth-action brawler, Nekome: Nazi Hunter, a game that looks like what you'd get if you took some of the cinematic brawling of Sifu and mixed it into a story where you play as a young Romani man on a warpath against the Nazis for killing his family. The trailer introduces us to the game's protagonist, Vano Nastasu, who is seeking revenge for the aforementioned homicide. There's no release date or release window for the game yet, but we do know it'll be arriving […]
Read full article at https://wccftech.com/nekome-nazi-hunter-gameplay-trailer-probablymonsters/

The last general Nintendo Direct event took place in September 2025, and while we've had several Nintendo Direct events this year (like two Super Mario Galaxy Movie Direct events), we're still waiting for 2026 to get its first big Direct event. Unfortunately, this year's iteration of Mario Day so far has not included a surprise Direct announcement, but we did get a release date for Yoshi and the Mysterious Book on the Nintendo Switch 2, the game that closed out September's Direct. It's frankly still a bit shocking that the Direct event that revealed several ways Nintendo was celebrating the […]
Read full article at https://wccftech.com/yoshi-and-the-mysterious-book-nintendo-switch-2-release-date/

Today, Pearl Abyss has revealed detailed PC, console, and macOS specifications for its highly anticipated upcoming open-world action/adventure game Crimson Desert. The minimum and recommended system requirements for PC were actually first announced (and then lowered) months ago, but this time the South Korean development team has provided much more in-depth information for gamers on how the game, powered by the BlackSpace engine, will run on their systems. The five PC tiers (Minimum, Low, Recommended, High, and Ultra) use the Minimum, Low, Medium, High, and Ultra graphics presets respectively, targeting upscaled 1080p (from 900p) at 30 FPS, 1080p at 30 FPS, 1080p at 60 FPS, 4K at 30 FPS, and 1440p at 60 […]
Read full article at https://wccftech.com/crimson-desert-gets-detailed-pc-console-mac-specs/

Star Wars Knights of the Old Republic Remake could be shaping up to be an impressive game, contrary to what its long stay in development hell suggests. X user and known insider Luke "100% Star Wars" claims to have seen the remake, and believes canceling Aspyr's version of the remake and starting from scratch was the right choice. While the insider, who was among the first to talk about the game's troubled development, did not reveal any specifics about the remake, which was just confirmed to still be in development last week by Saber's CEO Tim Willits, they did suggest […]
Read full article at https://wccftech.com/star-wars-knights-of-the-old-republic-remake-details-2026-reveal/

NVIDIA's GDC 2026 announcements also include updates for the RTX Remix remastering platform. The biggest news is the new Advanced Particle VFX system, which will be released sometime next month. With this new system, NVIDIA is directly responding to the RTX Remix open-source community's top feature request: a substantially more powerful and expressive particle system. The Advanced Particle VFX system introduces three distinct capability areas: The other news is the availability of a Quake III RTX early access (version 0.6) made by modder Woodboy, who took it upon themselves to remaster several levels of the original game by id […]
Read full article at https://wccftech.com/nvidia-rtx-remix-advanced-particle-vfx/

Starting July 1st, Meta will add "location fees" to ad buys targeting users in six countries, effectively offloading the cost of European digital services taxes onto the advertisers themselves.
The numbers. Fees will match each country's digital services tax rate:
How it works in practice. Per Meta's email to advertisers: "$100 in ads delivered to Italy will cost $103, plus any applicable VAT on top of that."
The fine print. The fees apply to where the ad is delivered, not where the advertiser is based, meaning a US brand running campaigns targeting French users will pay the French rate regardless.
Why we care. This is a direct, unavoidable cost increase hitting European campaigns on July 1, with no opt-out. If you're running ads targeting users in France, Italy, Spain, Austria, Turkey, or the UK, your effective CPM and CPA benchmarks are about to get more expensive, which means existing budgets will stretch less far and current ROAS targets may no longer be achievable without adjustment.
And since the fee is based on where the ad is delivered rather than where you're based, even non-European brands aren't off the hook.
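The pass-through math itself is simple to model. Here is a minimal sketch based on the $100-to-$103 Italy example from Meta's email; the function name is ours, and any rate other than Italy's implied 3% should be taken from Meta's published schedule:

```python
# Location-fee math implied by Meta's example: fee is a percentage of
# delivered spend, and VAT (if any) applies on top of the total.
FEE_RATES = {
    "IT": 0.03,  # implied by Meta's "$100 -> $103" Italy example
    # other affected countries: look up the actual DST rate (2%-5%)
}

def cost_with_location_fee(ad_spend: float, country: str, vat_rate: float = 0.0) -> float:
    """Ad spend plus the location fee, with optional VAT applied on top."""
    fee = ad_spend * FEE_RATES.get(country, 0.0)
    return round((ad_spend + fee) * (1 + vat_rate), 2)

print(cost_with_location_fee(100, "IT"))                 # 103.0
print(cost_with_location_fee(100, "IT", vat_rate=0.22))  # 125.66
```

In practice this means rebasing CPM and CPA targets per delivery country rather than per account.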
The big picture for advertisers. This isn't unique to Meta; Google and Amazon already charge similar pass-through fees. But it's a meaningful shift in how European ad budgets need to be calculated, and campaign managers should revisit their cost models before July 1 to account for the added overhead across affected markets.
The backdrop. Digital services taxes have been a flashpoint between Europe and Washington. The Trump administration has threatened retaliation against European firms over the levies, adding geopolitical uncertainty to what is already a complex compliance landscape for global advertisers.
Dig deeper. Meta Hikes Fees for Advertisers to Cover Europe's Digital Taxes (subscription needed)
We all want media coverage.
Positive coverage creates exposure, authority, trust, and often valuable backlinks.
But for many people, the path to getting it is a mystery. Others believe myths about how it works.
Some believe you have to be at the very top of your industry before the media will care about your story.
That's simply false.
Others believe you can simply buy your way into media coverage.
There's a small degree of truth to that.
You can find contributors willing to feature you (or your client) for a fee, but this blatantly violates every outlet's contributor guidelines. You may land the feature, but editors will eventually find out.
What happens then?
First, the article gets deleted or any mention of you and your links gets removed. Then, the contributor gets removed from the platform and blacklisted in the media industry. Finally, you get blacklisted too.
Good luck getting featured again. It won't happen.
The reality is that you can get featured in the media.
You just need to understand the process and execute it consistently.
You probably have a great story; you just may not realize it yet.
The media has to produce a constant stream of content. If you have a strong story, you're already one-third of the way to getting featured.
Let's start with what doesn't make a great story.
So what does make a great story?
Like the answer to most SEO questions: it depends.
A great story starts with an actual story.
You have to explain, in an engaging way, why anyone should care about what you have to say.
For example, I often tell the story of how I used PR to rebuild my success after being on my deathbed.
I explain that my agency's specific PR approach comes from the exact process I used to rebuild my own business, and that I want to give others the same advantage.
And my story is easily verifiable.
But you don't need a life-or-death struggle to have a compelling story.
You just need a story that shows a deeper purpose. A mission. Something people can get excited about and care about.
Even with the best story in the world, you still need an effective pitch.
Your pitch has to cut through the noise and grab attention. Journalists, producers, and others in the media are inundated with pitches; many receive hundreds every day. Your pitch has to tell your story clearly and quickly, and motivate them to respond.
Easier said than done.
Most pitches are sent by email, so most people start with the subject line. That's the exact opposite of what you should do.
Start with the body of the email. There's a reason for this, which we'll get to shortly.
Find a way to connect your story to current events. If a topic is already popular in the media, other outlets are more likely to cover it.
But remember: while the story involves you, it isn't about you.
You have to pitch from the perspective of what the audience wants. The journalist's, editor's, or producer's needs come second, and yours come in a distant last place.
Sorry, that's just the way it is.
You need to distill your story and why the audience should care into a few sentences. You can add a little more detail after that, but keep it short. If they see a wall of text, they'll likely delete your email.
Once your pitch is solid, write your subject line. It should be short, punchy, and aligned with your pitch.
Short and punchy matters because the subject line determines whether they open your email.
If the pitch doesn't align with the subject line, they'll likely delete the email without reading it. Getting attention means nothing if they don't read the message.
I once saw a publicist use a subject line that certainly grabbed attention, but it had zero positive impact and damaged his reputation.
What was it?
"Fuck You!"
Bottom line: your pitch must quickly and clearly show the value the audience will get, and your subject line must grab attention in a positive way while aligning with the pitch.
PR isn't a numbers game.
Yet people treat it like one. They buy or compile lists of media contacts and blast their pitch to anyone they can find.
That's no different from spam emails selling generic Viagra.
Success comes from sending the right pitch to the right people at the right time.
Finding the right people means identifying journalists, producers, and other media contacts who cover the types of stories you're telling.
Several expensive tools can help you find these contacts and their information. But you can often find the same information with a search engine and social media. In fact, that's how I built most of my media relationships.
As for the right time, that's largely a matter of chance.
There's no magic formula.
The time of day you send your pitch doesn't matter much unless it's extremely time-sensitive, which most business topics aren't. Producers often check email at certain times, but they won't touch it while preparing for or running their show.
Now here's something you need to avoid:
Don't bombard them with follow-up emails!
For truly time-sensitive stories, it may be acceptable to follow up within the same week. In most cases, though, wait about a week. Frequent follow-ups will annoy journalists, producers, and other media contacts.
Stop after two or three follow-ups. If you haven't received a response by then, they likely aren't interested in the story.
Try not to take it personally. They probably won't tell you it's not a fit. Given the sheer volume of pitches they receive, responding to every one would be a full-time job.
Most of your pitches won't result in media coverage.
The problem is that most people stop after a rejection or no response.
That's crazy to me.
I can't tell you how many times I've heard "no" or received no reply before finally landing a feature.
It happened because I didn't pitch once and move on. These contacts all started as strangers, but I invested time and energy in building real relationships.
As a result, when I reach out, they open and read my emails because I'm not a stranger. Those relationships make it far easier to turn a pitch into media coverage.
Most initial outreach won't lead to coverage. But if you nurture the right relationships, you'll eventually build a network of responsive press contacts.
Perplexity AI must stop using its Comet browser agent to make purchases on Amazon. A federal judge sided with Amazon in an early ruling over AI shopping bots.
Why we care. The case targets a core promise of AI agents: completing tasks like shopping on a user's behalf. If courts restrict how agents access sites, AI agents could face strict limits when interacting with logged-in accounts on major websites.
What happened. U.S. District Judge Maxine Chesney granted Amazon a preliminary injunction Monday in San Francisco federal court.
Catch-up quick. Amazon sued Perplexity in November, accusing the startup of computer fraud and unauthorized access. The company said Comet made purchases from Amazon on behalf of users without properly identifying itself as a bot.
What's next. The order is paused for one week to allow Perplexity to appeal.
What they're saying. Amazon spokesperson Lara Hendrickson told Bloomberg (subscription required) the injunction "will prevent Perplexity's unauthorized access to the Amazon store and is an important step in maintaining a trusted shopping experience for Amazon customers."



The Lamina Research Collective develops a phone companion powered by Learned Intelligence to support personal safety. It triggers emergency alerts, shares a GPS beacon, enables silent mode, and auto-contacts trusted people when needed. The companion offers persistent memory and voice interaction to act as your personal AI, available via subscription or a one-time purchase.
Some can't stop panning Apple over the myriad compromises it made to hit the MacBook Neo's $599 base price. Others can't stop gushing over the fact that Apple just launched a fairly decent, if somewhat spartan, budget MacBook, one that is near-perfect for browsing, social media consumption, and research. In our roundup of the Apple MacBook Neo's first impressions and initial reviews, we've tried to cover all the bases, bringing you appreciative brownie points as well as merited criticism. Here is everything that is being said about Apple's MacBook Neo CNET's Matt Elliott is […]
Read full article at https://wccftech.com/apple-macbook-neo-a-roundup-of-first-impressions-and-reviews-from-acerbic-commentary-to-gushing-paean/

Today may be Mario Day for Nintendo fans, but it's Game Developers Conference (GDC) 2026 week for the rest of the video game industry. On top of B2B-focused announcements from companies like Razer, this week also brings us plenty of insights into the video game industry and some of the biggest games from the past year, like 2025's Game of the Year, Clair Obscur: Expedition 33. Developer Sandfall Interactive is one of many studios hosting talks during GDC 2026, and during its session it revealed a significant detail about how the studio put the game together, and for a […]
Read full article at https://wccftech.com/sandfall-interactive-used-unreal-engine-blueprints-for-all-clair-obscur-expedition-33-gameplay-systems/

NVIDIA has announced that CD Projekt Red's upcoming title, The Witcher 4, will leverage its updated RTX Mega Geometry technology, offering higher frame rates and lower VRAM usage. CD Projekt Red & NVIDIA To Integrate Next-Gen RTX Mega Geometry Technology In The Witcher 4 With its RTX 50 GPUs powered by the Blackwell architecture, NVIDIA introduced a new technology called RTX Mega Geometry. This technology clusters the millions of triangles that make up the tens of thousands of objects you see in every scene. These clusters are compressed and cached over many frames, where they are intelligently reused as the player traverses the world. This makes […]
Read full article at https://wccftech.com/the-witcher-4-gets-updated-nvidia-rtx-mega-geometry-higher-fps-lower-vram-use/

NVIDIA has announced that its MFG 6X mode arrives on 31st March, and more AAA titles with DLSS 4.5 & Path Tracing are coming. NVIDIA MFG 6X "Dynamic Multi Frame Generation" Technology Will Be Available on 31st March NVIDIA's Multi-Frame Generation technology, or MFG, is getting its latest update on 31st March with a 6x mode. NVIDIA first introduced frame generation in DLSS 3 with a 2x mode, dialed it up to 4x with DLSS 4, and now users will be able to enjoy 6x frame generation in DLSS 4.5. The new NVIDIA DLSS 4.5 MFG enables up to a 6x mode, offering […]
Read full article at https://wccftech.com/nvidia-mfg-6x-mode-arrives-31st-march-007-first-light-control-resonant-dlss-4-5-path-tracing/

NVIDIA accelerates AI video generation in ComfyUI with a new App View feature, alongside FP4 & RTX Video Super Resolution support. NVIDIA Adds App View, FP4 & RTX Video Super Resolution Support To ComfyUI's AI Video Gen. Today, NVIDIA is announcing a new App View in ComfyUI, a simplified panel-based interface designed to make advanced AI workflows accessible to more creators. Traditionally, ComfyUI relies on complex node graphs. They're extremely powerful but intimidating for many users who are just getting started. App View hides that complexity. Creators can pick a model, set their prompts, […]
Read full article at https://wccftech.com/nvidia-rtx-acceleration-comfyui-for-faster-ai-video-gen-fp4-rtx-video-super-res/

As part of its GDC 2026 news, NVIDIA also provided some updates for GeForce NOW users. Two upcoming games have been confirmed to be available for streaming on day one: Remedy's CONTROL Resonant and Samson: A Tyndalston Story from Liquid Swords. The former is little surprise. Remedy has always worked closely with NVIDIA on its most recent games, Control and Alan Wake 2, adding NVIDIA RTX technologies like DLSS and ray tracing/path tracing to the PC versions and also making the games available on GeForce NOW. Indeed, today, NVIDIA confirmed that CONTROL Resonant will also support […]
Read full article at https://wccftech.com/geforce-now-control-resonant-samson-vr-90-fps/

NVIDIA's CEO, Jensen Huang, has published a rather interesting blog post about the state of the AI industry from a much broader perspective, summarizing it as a "five-layer" cake. Jensen Has Basically Summed Up AI Into a Five-Layer Cake, Claiming That the Opportunity Still Hasn't Been Realized. There is no doubt that Team Green is one of the biggest catalysts of the ongoing AI infrastructure buildout, given that the company provides not only essential compute resources but also several other utilities, which we'll discuss ahead. As we move toward GTC 2026, NVIDIA's CEO recently […]
Read full article at https://wccftech.com/nvidia-ceo-just-described-the-world-most-expensive-five-layer-cake/

The A18 Pro running in the MacBook Neo is tailor-made for AAA titles on the iPhone 16 Pro and iPhone 16 Pro Max, which meant it was only a matter of time before someone tried something as ridiculous as running Cyberpunk 2077 on Apple's most affordable portable Mac. It turns out that one YouTuber was daring enough to attempt this experiment and, according to the content creator, the experience wasn't terrible at all. At the absolute lowest settings possible, the MacBook Neo can achieve around 50FPS, though the relevant metrics aren't shown in the video. Right off the bat, Dave2D […]
Read full article at https://wccftech.com/macbook-neo-running-cyberpunk-2077/

Some of the specifications of the upcoming Wildcat Lake CPUs have leaked through an NBD shipping manifest. Wildcat Lake NBD Shipment Log Reveals a 15W Rating, Up To 1.5 GHz Boost Clock, and 6 MB of L3 Cache. Intel's power-efficient Wildcat Lake family has been spotted in an NBD shipping manifest, unveiling some of its specifications. The series will be based on the same Cougar Cove performance cores and Darkmont LP-E cores used in the current Panther Lake series. However, Wildcat Lake is aimed at ultra-power-efficient devices, including laptops and mini PCs, succeeding […]
Read full article at https://wccftech.com/wildcat-lake-nbd-shipment-reveals-1-5-ghz-boost-and-15w-of-tdp/

Embark Studios has pushed a new patch to ARC Raiders today, adding a new outfit set and two new haircuts for players to customize their characters with, on top of several bug fixes. But it's not what's added with today's patch that makes the latest update from Embark significant. Last week, major server issues caused players to lose their loadouts in ARC Raiders, even after they had successfully extracted from the Rust Belt and made it safely back to Speranza. While adamant that the studio does not normally compensate players for every incident of lost loot […]
Read full article at https://wccftech.com/embark-is-making-an-exception-compensate-arc-raiders-players-lost-loadouts-due-to-server-issues/

Google Ads is rolling out auto end screens, a new feature that appends an interactive, auto-generated card to the end of eligible video ads to nudge viewers toward a conversion.
How it works. An interactive screen appears for a few seconds immediately after the video finishes playing.

Why we care. Advertisers no longer need to manually build post-roll calls-to-action. This feature is on by default and changes the end of your video ads; if you've already built custom YouTube end screens, they'll be overridden without any warning. With end screens being the last thing a viewer sees before deciding to act, losing control of that moment matters.
The catch. Enabling auto end screens in Google Ads overrides any manually added YouTube end screens, meaning advertisers who've already customized their YouTube end cards will lose them.
Current limitations. The feature is only available for in-stream ads running in mobile app install campaigns, with broader expansion planned but not yet dated.
What stays the same. Auto end screens don't affect billing or view counts; they're purely an added engagement layer tacked on after a full video view.
Next steps. Advertisers running mobile app install campaigns should audit their video ads now: check whether auto end screens are serving as expected and verify that any manually added YouTube end screens aren't being silently overridden. As Google expands the feature beyond app installs, it's worth establishing a review process early so campaigns are ready when eligibility broadens.
Dig deeper. About auto end screens for video ads
The DSCRI-ARGDW pipeline maps 10 gates between your content and an AI recommendation across two phases: infrastructure and competitive. Because confidence multiplies across the pipeline, the weakest gate is always your biggest opportunity. Here, we focus on the first five gates.
The infrastructure phase (discovery through indexing) is a sequence of absolute tests: the system either has your content, or it doesn't. But even a pass isn't clean; as content moves through the gates, it degrades.
For example, a page that can't be rendered doesn't get "partially indexed," but it may get indexed with degraded information, and every competitive gate downstream operates on whatever survived the infrastructure phase.

If the raw material is degraded, the competition in the ARGDW phase starts with a handicap that no amount of content quality can overcome.
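To see why the weakest gate dominates, here is a minimal sketch of the multiplicative model. All per-gate confidence figures are invented for illustration:

```python
# Toy model of multiplicative confidence across the five DSCRI gates.
# All per-gate figures are invented for illustration.

def surviving_signal(gate_confidences):
    """Multiply per-gate confidences: the signal entering ARGDW."""
    signal = 1.0
    for confidence in gate_confidences:
        signal *= confidence
    return signal

# "Three As and an F": strong everywhere except rendering.
uneven = {"discovery": 0.95, "selection": 0.95, "crawl": 0.95,
          "rendering": 0.10, "indexing": 0.95}

# "Straight Cs": mediocre but consistent.
even = {gate: 0.70 for gate in uneven}

print(round(surviving_signal(uneven.values()), 3))  # 0.081
print(round(surviving_signal(even.values()), 3))    # 0.168
```

The consistent profile carries roughly twice the signal forward, which is the "straight C student" point expressed in numbers.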
The industry compressed these five distinct DSCRI gates into two words: "crawl and index." That compression hides five separate failure modes behind a single checkbox. This piece breaks the simplistic "crawl and index" into five clear gates so you can optimize far more effectively for the bots.
If you're a technical SEO, you might feel you can skip this. Don't.
You're probably doing 80% of what follows and missing the other 20%. The gates below provide measurable proof that your content reached the index with maximum confidence, giving it the best possible chance in the competitive ARGDW phase that follows.
The infrastructure gates are sequential dependencies: each gate's output is the next gate's input, and failure at any gate blocks everything downstream.
If your content isn't being discovered, fixing your rendering is wasted effort, and if your content is crawled but renders poorly, every annotation downstream inherits that degradation. Better to be a straight C student than three As and an F, because the F is the gate that kills your pipeline.
The audit starts with discovery and moves forward. The temptation to jump to the gate you understand best (and for many technical SEOs, that's crawling) is the temptation that wastes the most money.
Discovery and crawling are well-understood, while selection is often overlooked.
Discovery is an active signal. Three mechanisms feed it:
The entity home website is the primary discovery anchor for pull discovery, and confidence is key. The system asks not just "does this URL exist?" but "does this URL belong to an entity I already trust?" Content without entity association arrives as an orphan, and orphans wait at the back of the queue.
The push layer (IndexNow, MCP, structured feeds) changes the economics of this gate entirely, and I'll explain what changes when you stop waiting to be found and start pushing.
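The mechanics of the push layer are simple enough to sketch. Per the IndexNow protocol, you POST a small JSON payload to the shared endpoint and host a verification key file on your domain; the host, key, and URLs below are placeholders:

```python
import json

def build_indexnow_payload(host, key, urls):
    # The key file must actually be hosted at keyLocation so the
    # endpoint can verify you control the domain.
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

payload = build_indexnow_payload(
    "www.example.com",                 # placeholder host
    "a1b2c3d4e5f6",                    # placeholder key
    ["https://www.example.com/new-page"],
)
# POST this as application/json to https://api.indexnow.org/indexnow
print(json.dumps(payload, indent=2))
```

One submission notifies all participating engines, which is exactly the economic shift described above: you stop spending discovery budget waiting to be found.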
Selection is the system's opinion of you, expressed as crawl budget. As Microsoft Bing's Fabrice Canel says, "Less is more for SEO. Never forget that. Less URLs to crawl, better for SEO."
The industry spent two decades believing more pages equals more traffic. In the pipeline model, the opposite is true: fewer, higher-confidence pages get crawled faster, rendered more reliably, and indexed more completely. Every low-value URL you ask the system to crawl is a vote of no confidence in your own content, and the system notices.
Not every page that's discovered in the pull model is selected. Canel states that the bot assesses the expected value of the destination page and will not crawl the URL if the value falls below a threshold.
Crawling is the most mature gate and the least differentiating. Server response time, robots.txt, redirect chains: solved problems with excellent tooling, and not where the wins are, because you and most of your competition have been doing this for years.
What most practitioners miss, and what's worth thinking about: Canel confirmed that context from the referring page carries forward during crawling.
Your internal linking architecture isn't just a crawl pathway (getting the bot to the page) but a context pipeline (telling the bot what to expect when it arrives), and that context influences selection and then interpretation at rendering before the rendering engine even starts.
Rendering fidelity is where the infrastructure story diverges from what the industry has been measuring.
After crawling, the bot attempts to build the full page. It sometimes executes JavaScript (don't count on this, because the bot doesn't always invest the resources to do so), constructs the document object model (DOM), and produces the rendered DOM.
I coined the term rendering fidelity to name this variable: how much of your published content the bot actually sees after building the page. Content behind client-side rendering that the bot never executes isn't degraded, it's gone, and information the bot never sees can't be recovered at any downstream gate.
Every annotation, every grounding decision, every display outcome depends on what survived rendering. If rendering is your weakest gate, it's your F on the report card, and remember: everything downstream inherits that grade.
The bot's willingness to invest in rendering your page isn't uniform. Canel confirmed that the more common a pattern is, the less friction the bot encounters.
I've reconstructed the following hierarchy from his observations. The ranking is my model. The underlying principle (pattern familiarity reduces selection, crawl, rendering, and indexing friction and processing cost) is confirmed:
| Approach | Friction level | Why |
| --- | --- | --- |
| WordPress + Gutenberg + clean theme | Lowest | 30%+ of the web. Most common pattern. Bot has highest confidence in its own parsing. |
| Established platforms (Wix, Duda, Squarespace) | Low | Known patterns, predictable structure. Bot has learned these templates. |
| WordPress + page builders (Elementor, Divi) | Medium | Adds markup noise. Downstream processing has to work harder to find core content. |
| Bespoke code, perfect HTML5 | Medium-High | Bot does not know your code is perfect. It has to infer structure without a pattern library to validate against. |
| Bespoke code, imperfect HTML5 | High | Guessing with degraded signals. |
The critical implication, also from Canel, is that if the site isn't important enough (low publisher entity authority), the bot may never reach rendering, because the cost of parsing unfamiliar code exceeds the estimated benefit of obtaining the content. Publisher entity confidence has a huge influence on whether you get crawled and also how carefully you get rendered (and everything else downstream).
JavaScript is the most common rendering obstacle, but it isn't the only one: missing CSS, proprietary elements, and complex third-party dependencies can all produce the same result, a bot that sees a degraded version of what a human sees, or can't render the page at all.
Google and Bing render JavaScript. Most AI agent bots don't. They fetch the initial HTML and work with that. The industry built on Google and Bing's favor and assumed it was a standard.
Perplexity's grounding fetches work primarily with server-rendered content. Smaller AI agent bots have no rendering infrastructure.
The practical consequence: a page that loads a product comparison table via JavaScript displays perfectly in a browser but renders as an empty container for a bot that doesn't execute JS. The human sees a detailed comparison. The bot sees a div with a loading spinner.
The annotation system classifies the page based on an empty space where the content should be. I've seen this pattern repeatedly in our database: different systems see different versions of the same page because rendering fidelity varies by bot.
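The failure mode is easy to reason about in code: a bot that skips JavaScript works only with the initial HTML, so any phrase injected client-side is simply absent. The HTML snippets below are contrived examples:

```python
# Toy check: does the content a human sees survive in the initial HTML,
# or does it only exist after JavaScript runs? A bot that skips JS works
# with the raw HTML, so anything injected client-side is invisible to it.

def visible_without_js(initial_html, required_phrases):
    """Return the phrases a non-rendering bot would actually find."""
    return [p for p in required_phrases if p in initial_html]

# Client-side-rendered page: the comparison table arrives via JS later.
csr_html = '<div id="comparison"><span class="spinner">Loading...</span></div>'
# Server-rendered page: the same table is in the initial response.
ssr_html = '<div id="comparison"><table><tr><td>Plan A</td></tr></table></div>'

phrases = ["Plan A"]
print(visible_without_js(csr_html, phrases))  # []
print(visible_without_js(ssr_html, phrases))  # ['Plan A']
```

Running the same check against your own pages (fetch the raw HTML, search for your key content) is a cheap proxy for what the smaller bots see.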
The traditional rendering model assumes one pathway: HTML to DOM construction. You now have two alternatives.

WebMCP, built by Google and Microsoft, gives agents direct DOM access, bypassing the traditional rendering pipeline entirely. Instead of fetching your HTML and building the page, the agent accesses a structured representation of your DOM through a protocol connection.
With WebMCP, you give yourself a huge advantage: the bot doesn't need to execute JavaScript or guess at your layout, because the structured DOM is served directly.
Markdown for Agents uses HTTP content negotiation to serve pre-simplified content. When the bot identifies itself, the server delivers a clean markdown version instead of the full HTML page.
The semantic content arrives pre-stripped of everything the bot would have to remove anyway (navigation, sidebars, JavaScript widgets), which means the rendering gate is effectively skipped with zero information loss. If you're using Cloudflare, you have an easy implementation that they launched in early 2026.
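Mechanically, this is ordinary HTTP content negotiation. A minimal sketch, assuming the server has a markdown rendition ready and ignoring the quality-value parsing a production implementation would need:

```python
# Sketch of content negotiation for agent traffic: if the client's
# Accept header asks for text/markdown, serve the pre-simplified
# version; otherwise serve the normal HTML page.
# (Simplified: real servers parse Accept quality values, send Vary
# headers, and so on.)

def choose_representation(accept_header, html_body, markdown_body):
    if "text/markdown" in accept_header:
        return ("text/markdown", markdown_body)
    return ("text/html", html_body)

html = "<html><body><nav>Menu</nav><main><h1>Pricing</h1></main></body></html>"
md = "# Pricing\n"

print(choose_representation("text/markdown", html, md)[0])               # text/markdown
print(choose_representation("text/html,application/xhtml+xml", html, md)[0])  # text/html
```

The agent gets the semantic core with none of the chrome, and the human browser experience is untouched.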
Both alternatives change the economics of rendering fidelity in the same way that structured feeds change discovery: they replace a lossy process with a clean one.Β
For non-Google bots, try this: disable JavaScript in your browser and look at your page, because what you see is what most AI agent bots see. You can fix the JavaScript issue with server-side rendering (SSR) or static site generation (SSG), so the initial HTML contains the complete semantic content regardless of whether the bot executes JavaScript.
But the real opportunity lies in new pathways: one architectural investment in WebMCP or Markdown for Agents, and every bot benefits regardless of its rendering capabilities.
Rendering produces a DOM. Indexing transforms that DOM into the system's proprietary internal format and stores it. Two things happen here that the industry has collapsed into one word.
Rendering fidelity (Gate 3) measures whether the bot saw your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away. Both losses are irreversible, but they fail differently and require different fixes.
What follows is a mechanical model I've reconstructed from confirmed statements by Canel and Gary Illyes.
Strip: The system removes repeating elements: navigation, header, footer, and sidebar. Canel confirmed directly that these aren't stored per page.
The system's primary goal is to find the core content. This is why semantic HTML5 matters at a mechanical level. <nav>, <header>, <footer>, <aside>, <main>, and <article> tags tell the system where to cut. Without semantic markup, it has to guess.
Illyes confirmed at BrightonSEO in 2017 that finding core content at scale was one of the hardest problems they faced.
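The role semantic tags play in the strip step can be sketched with a toy extractor: keep what sits inside <main>, discard anything inside <nav>, <header>, <footer>, or <aside>. This illustrates the principle, not the actual stripping algorithm any engine uses:

```python
from html.parser import HTMLParser

# Toy "strip" step: semantic HTML5 tags tell the parser where to cut.
CHROME = {"nav", "header", "footer", "aside"}

class CoreContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chrome_depth = 0   # nesting depth of boilerplate elements
        self.in_main = False
        self.core = []

    def handle_starttag(self, tag, attrs):
        if tag in CHROME:
            self.chrome_depth += 1
        elif tag == "main":
            self.in_main = True

    def handle_endtag(self, tag):
        if tag in CHROME:
            self.chrome_depth -= 1
        elif tag == "main":
            self.in_main = False

    def handle_data(self, data):
        # Keep text only if we are inside <main> and outside any chrome.
        if self.in_main and self.chrome_depth == 0 and data.strip():
            self.core.append(data.strip())

page = ("<body><nav>Home | Blog</nav>"
        "<main><h1>Rendering fidelity</h1><p>Core content.</p></main>"
        "<footer>Copyright Example</footer></body>")

parser = CoreContentExtractor()
parser.feed(page)
print(parser.core)  # ['Rendering fidelity', 'Core content.']
```

Take the semantic tags away and the extractor has nothing to cut against, which is exactly the "it has to guess" situation described above.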
Chunk: The core content is broken into segments: text blocks, images with associated text, video, and audio. Illyes described the result as something like a folder with subfolders, each containing a typed chunk (he probably used the term "passage"; potato, potarto, tomato, tomarto). The page becomes a hierarchical structure of typed content blocks.
Convert: Each chunk is transformed into the system's proprietary internal format. This is where semantic relationships between elements are most vulnerable to loss.
The internal format preserves what the conversion process recognizes, and everything else is discarded.
Store: The converted chunks are stored in a hierarchical structure.

The individual steps are confirmed. The specific sequence and the wrapper hierarchy model are my reconstruction of how those confirmed pieces fit together.
In this model, the repeating elements stripped in the first step are not discarded but stored at the appropriate wrapper level: navigation at site level, category elements at category level. The system avoids redundancy by storing shared elements once at the highest applicable level.
Like my "Darwinism in search" piece from 2019, this is a well-informed, educated guess. And I'm confident it will prove to be substantively correct.
The wrapper hierarchy changes three things you already do:
URL structure and categorization: Because each page inherits context from its parent category wrapper, URL structure determines what topical context every child page receives during annotation (the first gate in the phase I'll cover in the next article: ARGDW).
A page at /seo/technical/rendering/ inherits three layers of topical context before the annotation system reads a single word. A page at /blog/post-47/ inherits one generic layer. Flat URL structures and miscategorized pages create annotation problems that might appear to be content problems.
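A toy sketch of that inheritance, under my wrapper-hierarchy model: the context layers are simply the page's ancestor path segments. Real systems are obviously richer than this, but the asymmetry between a deep categorized URL and a flat one falls straight out:

```python
# Toy illustration of wrapper-hierarchy context inheritance: each
# ancestor path segment is one layer of topical context the page
# inherits before a word of its content is read.

def inherited_context(url_path):
    segments = [s for s in url_path.strip("/").split("/") if s]
    # Every ancestor directory contributes one context layer.
    return ["/".join(segments[: i + 1]) for i in range(len(segments) - 1)]

print(inherited_context("/seo/technical/rendering/my-article"))
# ['seo', 'seo/technical', 'seo/technical/rendering']
print(inherited_context("/blog/post-47"))
# ['blog']
```

Three specific layers versus one generic one, before annotation even starts.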
Breadcrumbs validate that the page's position in the wrapper hierarchy matches the physical URL structure (i.e., match = confidence, mismatch = friction). Breadcrumbs matter even when users ignore them because they're a structural integrity signal for the wrapper hierarchy.
Meta descriptions: Google's Martin Splitt suggested in a webinar with me that the meta description is compared to the system's own LLM-generated summary of the page. If they match, a slight confidence boost. If they diverge, no penalty, but a missed validation opportunity.
Conversion fidelity fails when the system can't figure out which parts of your page are core content, when your structure doesn't chunk cleanly, or when semantic relationships fail to survive format conversion.
The critical downstream consequence that I believe almost everyone is missing: indexing and annotation are separate processes.
A page can be indexed but poorly annotated (stored but semantically misclassified). I've watched it happen in our database: a page is indexed, it's recruited by the algorithmic trinity, and yet the entity still gets misrepresented in AI responses because the annotation was wrong.
The page was there. The system read it. But it read a degraded version (rendering fidelity loss at Gate 3, conversion fidelity loss at Gate 4) and filed it in the wrong drawer (annotation failure at Gate 5).
The industry built an entire sub-discipline around crawl budget. That's important, but once you break the pipeline into its five DSCRI gates, you see that it's just one piece of a larger set of parameters: every gate consumes computational resources, and the system allocates those resources based on expected return. This is my generalization of a principle Canel confirmed at the crawl level.
| Gate | Budget type | What the system asks |
| --- | --- | --- |
| 1 (Selected) | Crawl budget | "Is this URL a candidate for fetching?" |
| 2 (Crawled) | Fetch budget | "Is this URL worth fetching?" |
| 3 (Rendered) | Render budget | "Is this page a candidate for rendering?" |
| 4 (Indexed) | Chunking/conversion budget | "Is this content worth carefully decomposing?" |
| 5 (Annotated) | Annotation budget | "Is this content worth classifying across all dimensions?" |
Each budget is governed by multiple factors:
The system isn't just deciding whether to process but how much to invest. The bot may crawl you but render cheaply, render fully but chunk lazily, or chunk carefully but annotate shallowly (fewer dimensions). Degradation can occur at any gate, and the crawl budget is just one example of a general principle.
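One way to picture the per-gate economics is a toy expected-value model. The thresholds and scores below are invented; only the crawl-level threshold behavior is confirmed by Canel, and the generalization to every gate is my model:

```python
# Toy model of per-gate budget allocation: the system invests at a gate
# only if expected value (confidence x payoff) clears that gate's cost
# threshold. All numbers are invented for illustration.

GATE_THRESHOLDS = {          # processing cost the return must justify
    "select": 0.10,
    "crawl": 0.20,
    "render": 0.35,
    "index": 0.30,
    "annotate": 0.40,
}

def gates_funded(confidence, payoff):
    expected_value = confidence * payoff
    return [g for g, cost in GATE_THRESHOLDS.items() if expected_value >= cost]

# Trusted entity, valuable page: everything gets funded.
print(gates_funded(confidence=0.9, payoff=0.8))
# Low-confidence orphan page: selected and crawled cheaply, never rendered.
print(gates_funded(confidence=0.4, payoff=0.6))
```

The second page is in the index's field of view but never gets past a cheap fetch, which is the "crawled but rendered cheaply" degradation described above.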
The SEO industry's misconceptions about structured data run the full spectrum:
None of those positions is quite right.
Structured data isn't necessary. The system can, and does, classify content without it. But it's helpful in the same way the meta description is: it confirms what the system already suspects, reduces ambiguity, and builds confidence.
The catch, also like the meta description, is that it only works if it's consistent with the page. Schema that contradicts the content doesn't just fail to help: it introduces a conflict the system has to resolve, and the resolution rarely favors the markup.
When the bot crawls your page, structured data requires no rendering, interpretation, or language model to extract meaning. It arrives in the format the system already speaks: explicit entity declarations, typed relationships, and canonical identifiers.
In my model, this makes structured data the lowest-friction input the system processes, and I believe it's processed before unstructured content because it's machine-readable by design. Semantic HTML tells the system which parts carry the primary semantic load, and semantic structure is what survives the strip-and-chunk process best because it maps directly to the internal representation.
Schema at indexing works the same way: instead of requiring the annotation system to infer entity associations and content types from unstructured text, schema declares them explicitly, like a meta description confirming what the page summary already suggested.
The system compares, finds consistency, and confidence rises. The entire pipeline is a confidence preservation exercise: pass each gate and carry as much confidence forward as possible. Schema is one of the cleaner tools for protecting that confidence through the infrastructure phase.
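A minimal sketch of schema as a confirmation layer: a JSON-LD block declaring a hypothetical product entity, plus the kind of consistency check that matters: does the markup agree with what the page visibly says? (The entity names are hypothetical.)

```python
import json

# Hypothetical JSON-LD block, embedded in the page head as
# <script type="application/ld+json">...</script>.
schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Trail Shoe",                      # hypothetical entity
    "brand": {"@type": "Brand", "name": "Acme"},
}

page_title = "Acme Trail Shoe - Lightweight Hiking Footwear"

def schema_consistent(schema_obj, visible_title):
    # Markup should confirm what the page already says, not contradict it.
    return schema_obj.get("name", "") in visible_title

print(json.dumps(schema, indent=2))
print(schema_consistent(schema, page_title))  # True
```

A markup-versus-page audit along these lines (entity name, type, key attributes) catches the contradiction case before the system has to resolve it against you.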
That said, Canel noted that Microsoft has reduced its reliance on schema. The reasons are worth understanding:
Schema's value isn't disappearing, but it's shifting: the signal matters most where the system's own inference is weakest, and least where the content is already clean, well-structured, and unambiguous.
Schema and HTML5 have been part of my work since 2015, and I've written extensively about them over the years. But I've always seen structured data as one tool among many for educating the algorithms, not the answer in itself. That distinction matters enormously.
Brand is the key, and for me, always has been.
Without brand, all the structured data in the world won't save you. The system needs to know who you are before it can make sense of what you're telling it about yourself.
Schema describes the entity and brand establishes that the entity is worth describing. Get that order wrong, and you're decorating a house the system hasn't decided to visit yet.
The practical reframe: structured data implementation belongs in the infrastructure audit, and it's the format that makes feeds and agent data possible in the first place. But it's a confirmation layer, not a foundation, and the system will trust its own reading over yours if the two diverge.
The multiplicative nature of the pipeline means the same logic that makes your weakest gate your biggest problem also makes gate-skipping your biggest opportunity.
If every gate attenuates confidence, removing a gate entirely doesn't just save you from one failure mode: it removes that gate's attenuation from the equation permanently.
To make that concrete, here's what the math looks like across seven approaches. The base case assumes 70% confidence at every gate, producing a 16.8% surviving signal across all five in DSCRI. Where an approach improves a gate, I've used 75% as the illustrative uplift.
These are invented numbers, not measurements. The point is the relative improvement, not the figures themselves.

| Approach | What changes | Entering ARGDW with |
| --- | --- | --- |
| Pull (crawl) | Nothing | 16.8% |
| Schema markup | I → 75% | 18.0% |
| WebMCP | R skipped | 24.0% |
| IndexNow | D skipped, S → 75% | 25.7% |
| IndexNow + WebMCP | D skipped, S → 75%, R skipped | 36.8% |
| Feed (Merchant Center, Product Feed) | D, S, C, R skipped | 70.0% |
| MCP (direct agent data) | D, S, C, R, I skipped | 100% |
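The table's arithmetic can be checked directly. A short sketch using exact fractions so the rounding matches the figures shown (the numbers themselves remain invented, as stated above):

```python
from fractions import Fraction

# Reproducing the illustrative table: 70% per gate as the base case,
# 75% where an approach improves a gate, and 100% (no attenuation)
# where a gate is skipped. Gate order: D, S, C, R, I.

BASE = Fraction(70, 100)
UPLIFT = Fraction(75, 100)
SKIP = Fraction(1)

scenarios = {
    "Pull (crawl)":      [BASE] * 5,
    "Schema markup":     [BASE, BASE, BASE, BASE, UPLIFT],
    "WebMCP":            [BASE, BASE, BASE, SKIP, BASE],
    "IndexNow":          [SKIP, UPLIFT, BASE, BASE, BASE],
    "IndexNow + WebMCP": [SKIP, UPLIFT, BASE, SKIP, BASE],
    "Feed":              [SKIP, SKIP, SKIP, SKIP, BASE],
    "MCP":               [SKIP] * 5,
}

def entering_argdw(gates):
    signal = Fraction(1)
    for gate in gates:
        signal *= gate
    return float(round(signal * 100, 1))

for name, gates in scenarios.items():
    print(f"{name}: {entering_argdw(gates)}%")
```

Multiplying through reproduces every row, including the 36.8% for the IndexNow + WebMCP combination.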
The infrastructure phase is pre-competitive. The annotation, recruited, grounded, displayed, and won (ARGDW) gates are where your content competes against every alternative the system has indexed. Competition is multiplicative too, so what you carry into annotation is what gets multiplied.
A brand that navigated all five DSCRI gates with 70% enters the competitive phase with 16.8% confidence intact. A brand on a feed enters with 70%. A brand on MCP enters with 100%. The competitive phase hasn't started yet, and the gap is already that wide.
There's an asymmetry worth naming here. Getting through a DSCRI gate with a strong score is largely within your control: the thresholds are technical, the failure modes are known, and the fixes have playbooks.
Getting through an ARGDW gate with a strong score depends on how you compare to all the alternatives in the system. The playbooks are less well developed, some don't exist at all (annotation, for example), and you can't control the comparison directly; you can only influence it.
Which means the confidence you carry into annotation is the only part of the competitive phase you can fully engineer in advance.
Optimizing your crawl path with schema, WebMCP, IndexNow, or combinations of all three will move the needle, and the table above shows by how much. But a feed or MCP connection changes what game you're playing.
Every content type benefits from skipping gates, but the benefit scales with the business stakes at the end of the pipeline, and nothing has more at stake than content where the end goal is a commercial transaction.
The MCP figure represents the best case for the DSCRI phase: direct data availability bypasses all five infrastructure gates. In practice, the number of gates skipped depends on what the MCP connection provides and how the specific platform processes it. The principle holds: every gate skipped is an exclusion risk avoided and potential attenuation removed before competition starts.
A product feed is only the first rung. Andrea Volpini walked me through the full capability ladder for agent readiness:

That distinction is what I built AI assistive agent optimization (AAO) around: engineering the conditions for an agent to act on your behalf, not just mention you.
Volpini's ladder makes the mechanic concrete: each rung skips more gates, removes more exclusion risk, and eliminates more potential attenuation before competition starts. A brand with all three is playing a different game from a brand that's still waiting for a bot to crawl its product pages.
Note: Always keep this in mind when optimizing your site and content: make your content friction-free for bots and tasty for algorithms.
Five gates. Five absolute tests. Pass or fail (and a degrading signal even on pass).
The solutions are well documented:
The infrastructure phase is the only phase with a playbook, and opportunity cost is the cheapest failure pattern to fix.
But DSCRI is only half the pipeline, and it's the easiest to deal with.
After indexing, the scoreboard turns on. The five competitive gates (ARGDW) are competitive tests: your content doesn't just need to pass, it needs to beat the competition. What your content carries into the kickoff stage of those competitive gates is what survived DSCRI. And the entry gate to ARGDW is annotation.
The next piece opens annotation: the gate the industry has barely begun to address. It's where the system attaches sticky notes to your indexed content across 24+ dimensions, and every algorithm in the ARGDW phase uses those notes to decide what your content means, who it's for, and whether it deserves to be recruited, grounded, displayed, and recommended.
Those sticky notes are the be-all and end-all of your competitive position, and almost nobody knows they exist.
In "How the Bing Q&A / Featured Snippet Algorithm Works," in a section I titled "Annotations are key," I explained what Ali Alvi told me on my podcast: "Fabrice and his team do some really amazing work that we actually absolutely rely on."
He went further: without Canel's annotations, Bing couldn't build the algos to generate Q&A at all. A senior Microsoft engineer, on the record, in plain language.
The evidence trail has been there for six years. That, for me, makes annotation the biggest untapped opportunity in search, assistive, and agential optimization right now.
This is the third piece in my AI authority series.

When people speak naturally, their language flows. It's often messy, incomplete, and not especially coherent. The Google search bar, however, required something different. Users had to compress their needs into short phrases or slightly longer queries, what's traditionally classified as short-tail or long-tail.
To make that work, users stacked queries across a journey, moving through a funnel from A to B and refining as they went. In the process, users often stripped out personalized nuance to match what they believed the search engine could understand. In response, SEO professionals built systems around that constraint, grouping queries by search volume, categorizing them by a limited set of intents, and measuring competitiveness.
That dynamic is changing. SEOs need to understand the behavioral change that's emerging. Google is promoting Gemini, and phone manufacturers like Samsung are marketing AI-enabled features as product USPs. Alongside this product marketing, there's also a level of education happening. Users are being encouraged to be more expressive with their queries, personalize their searches, and describe what they're looking for in greater depth.

This is where we need to move away from the notion of keyword research to prompt research. Keyword research traditionally assumes that demand can be quantified, that variations can be listed and grouped, and that optimization happens at a phrase level or a cluster level. In the new hybrid AI and organic search world, demand is much more of a generative concept. Prompts can be written in countless ways while preserving the same underlying need.
This doesn't make keyword research obsolete, but it does change its focus. Instead of extracting keywords from tools as we've done, we also need to start understanding and modeling journeys. Instead of grouping by volume alone, we need to group by decision stage and the type and level of uncertainty the user has.
The output of this process isn't simply a keyword map, but a task map that accurately reflects the real pressures and constraints experienced by the audience. This is an evolution from short-tail and long-tail keyword research to an infinite tail of prompt research.
Dig deeper: Why AI optimization is just long-tail SEO done right
You can describe the infinite tail as an expansion of the long tail. But that underestimates what's actually changing. It's not just about more niche phrases or longer query strings. It's about the level of personalization that's been layered into each request.
As users add context, constraints, and preferences, prompts become unique combinations of a multitude of factors. The number of possible combinations effectively becomes infinite, even if the underlying tasks remain finite. AI systems respond by evaluating the given prompts and probabilistically predicting the next tokens rather than using exact-match strings.
Itβs less about how you rank for a specific keyword or whether youβre visible in AI for a specific phrase. It becomes whether your content has the highest probability of satisfying the situation being described. Thatβs a different optimization problem altogether. Youβre not competing on phrasing. Youβre competing on task completion.
This part of the journey is where βfuzzy searchesβ happen, meaning the path isnβt a straight line. Success isnβt just about finishing a task. Itβs about making sure the user actually found what they were looking for. Since every user moves differently, the process is flexible rather than a set of rigid steps.
Dig deeper: From search to answer engines: How to optimize for the next era of discovery
One of the most important mechanics in AI search is query fan-out. When a complex prompt is submitted, the system doesnβt treat it as a single string. Instead, it decomposes a request into a network of subquestions, classifications, and checks that together form a broader evaluation framework.
From an SEO perspective, this means your content moves beyond evaluation against a single phrase or specific document matches. Instead, it’s assessed across a network of related questions, with a collective determination of whether it can satisfy a broader task.
In a fan-out world, you win by supporting the entire decision cluster that surrounds that term. If your content addresses only one narrow dimension of the task, it becomes fragile. If it supports multiple layers of the decision, it becomes resilient. Fan-out rewards structural coverage and contextual relevance rather than repetition of specific phrases.
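As a toy illustration of why narrow coverage is fragile under fan-out, the sketch below scores a page against a whole decision cluster instead of a single query. Everything here is invented for clarity: real systems generate subquestions with an LLM and use far richer relevance models than word overlap.

```python
# Toy illustration of fan-out-style evaluation: a page is scored against
# every subquestion in a decision cluster, not just the original prompt.
# The subquestions and the word-overlap scoring are invented for clarity.

def decompose(prompt):
    # A real system would generate these with an LLM; here they are hard-coded.
    return [
        "what features matter for this purchase",
        "how do the main options compare",
        "what do reviews say about reliability",
        "what does it cost and is it worth it",
    ]

def relevance(page_text, question):
    # Crude proxy: fraction of question words that appear in the page.
    words = question.lower().split()
    hits = sum(1 for w in words if w in page_text.lower())
    return hits / len(words)

def fan_out_score(page_text, prompt):
    subqs = decompose(prompt)
    scores = [relevance(page_text, q) for q in subqs]
    # Content covering only one dimension is fragile, so track the weakest
    # subquestion score alongside the average.
    return {"avg": sum(scores) / len(scores), "weakest": min(scores)}

narrow_page = "Our product costs $99 and is worth it."
broad_page = ("Features that matter, how the main options compare, "
              "what reviews say about reliability, and what it costs.")

print(fan_out_score(narrow_page, "which product should I buy?"))
print(fan_out_score(broad_page, "which product should I buy?"))
```

The narrow page scores well on the pricing subquestion but collapses to zero on the others; the broad page holds up across the cluster. That minimum-score intuition is the fragility the paragraph above describes.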
Grounding queries give the LLM confidence in the results of its fanned-out subqueries. AI systems generate answers and then attempt to validate them.
They’re used to check whether a proposed answer is supported elsewhere, whether claims are consistent across sources, and whether the entity behind the information is reputable. If an AI system includes your brand in a summarized response, it needs enough confidence to defend that inclusion if challenged by conflicting information.
This changes the meaning of authority. In traditional SEO, ranking could be achieved through content, links, and other forms of manipulation. In AI search, selection also depends on how easily your content can be corroborated against a broader consensus of sources. This can involve factors tied to entity clarity, including structure, data consistency, consistent messaging, and external validation. These signals reduce uncertainty for the system. You’re not just trying to appear. You’re trying to be selected and defended.
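To make the corroboration idea concrete, here is a deliberately simplified sketch of a grounding check: a claim is only treated as defensible if enough independent sources agree. The substring matching and sample snippets are invented; real systems use retrieval and entailment models, not string lookups.

```python
# Toy grounding check: a claim counts as "defensible" only if a minimum
# number of independent sources corroborate it. The matching logic and
# example sources are invented purely for illustration.

def is_grounded(claim, sources, min_support=2):
    support = sum(1 for text in sources if claim.lower() in text.lower())
    return support >= min_support

sources = [
    "Acme's widget ships with a two-year warranty, reviewers note.",
    "The spec sheet confirms a two-year warranty on every widget.",
    "One forum post claims the warranty is five years.",
]

print(is_grounded("two-year warranty", sources))  # True: two sources agree
print(is_grounded("five years", sources))         # False: only one source
```

The point of the sketch is the threshold, not the matching: a claim repeated consistently across independent sources is easier for a system to defend than one that appears only once.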
Dig deeper: The authority era: How AI is reshaping what ranks in search
Organic search isn’t disappearing. Ranking still influences discovery, technical SEO still shapes crawlability, and architecture still determines how well a site and its content are understood.
But now, AI layers sit on top, synthesizing information and influencing which brands are surfaced within conversational responses. In this hybrid environment, organic visibility feeds AI selection. The two aren’t mutually exclusive, yet neither fully depends on the other.
AI selection can reinforce brand perception, and fan-out rewards depth of coverage. Grounding then rewards trust and consistency. This is where the infinite tail rewards genuine audience understanding and the creation of websites and content systems that support it.
This is a shift from keyword research to prompt research, and not just a cosmetic renaming of the process. Success will depend on understanding why people search, the decisions theyβre making, the uncertainties they face, and the evidence they need before committing. Search increasingly revolves around satisfying situations rather than matching strings. Designing for the infinite tail means designing for people and the tasks theyβre trying to complete.
Dig deeper: How to use AI response patterns to build better content

Capcom has confirmed that Resident Evil Requiem Story DLC is on the way. Resident Evil Requiem has become a huge success for Capcom, having already sold over 5 million copies across all platforms. In a post shared today, Requiem’s game director, Koshi Nakanishi, confirmed new add-ons, including a Photo Mode and a “new minigame”. Furthermore, […]
The post Resident Evil Requiem Story DLC and update content confirmed appeared first on OC3D.
Apparently, Epic Games hasn't been making enough money on Fortnite, and needs to raise the price of its premium in-game currency, V-Bucks, to "help pay the bills." "The cost of running Fortnite has gone up a lot and weβre raising prices to help pay the bills." That's what Epic tells players right at the top of a new blog post, which announces that the base price of the premium currency will be going up. The change goes into effect on March 19, 2026, after which every dollar you spend on V-Bucks will earn you less than it did before. After [β¦]
Read full article at https://wccftech.com/fortnite-v-bucks-price-increase-march-2026-epic-games/

The comic book series turned Emmy award-winning TV series, The Boys, is making its first foray into video games later this month. Initially announced back in December 2025, The Boys: Trigger Warning is a VR adventure from publisher Sony Pictures VR and developer ARVORE, and it'll arrive on Meta Quest 3 headsets on March 26, 2026. The news comes with a new release date trailer, which shows off more gameplay compared to what we saw in December and includes a few more characters from the show that'll be featured in the game. It doesn't, however, confirm if any more of [β¦]
Read full article at https://wccftech.com/the-boys-trigger-warning-release-date-meta-quest-3-psvr2-sony-pictures-vr/

βContent is kingβ remains one of the most widely accepted ideas in SEO. Not everyone has agreed. Different schools of thought have always existed, with some practitioners prioritizing backlinks and others focusing on technical SEO.
Content is often treated as the primary driver of search visibility. I’m not disputing that.
My point is simpler: if youβve relied on content to drive results β and earn a living β you should start doubling down on distribution.
With AI search changing the game, creating great content (and, yes, building some backlinks) is no longer enough to get it seen. The more important question may no longer be βWhat should I write next?β but βWhere should I push this next?β
Content distribution has become far more important in recent years, especially as audiences spread across more online spaces. In many teams, this job was usually outsourced to someone other than SEOs:
Sure, distribution held some value to SEO, but it was generally considered more beneficial to other functions.
Thanks to AI search, itβs finally landed squarely on our plate. Since AI models have fragmented search to an unprecedented level, distribution is now key to meaningful SEO outcomes.
There are three key drivers behind this change:
If this all sounds a bit abstract, letβs briefly dig into the evidence and explain whatβs really going on.
Search is fragmenting as people use a wider range of tools. Ideally, one strategy would work everywhere, but research shows thatβs not the case.
AI search tools cite different sources, a 2025 Search Atlas study found. Some show significantly more overlap with the SERPs than others. This indicates that different tools follow different sourcing logic. And as long as thatβs true, optimizing for one wonβt necessarily boost visibility on another.
The whole thing is even trickier because users seem more open to switching tools than before. Gemini may soon surpass the formerly unrivaled ChatGPT in traffic share, according to Similarweb. That could change again quickly.
Thinking thereβs a single clear winner, like Google used to be, would be wrong. Focusing on the most popular tool at the moment isnβt a guaranteed strategy.
To maximize visibility, we need to consider how multiple AI tools source their information, which implies our distribution strategy needs to be broad.
Dig deeper: Tracking AI search citations: Whoβs winning across 11 industries
The Search Atlas study showed that some AI search tools overlap with Google more than others β but in all cases, the overlap is pretty low. Perplexity ranked the highest at 43%, while ChatGPT barely hit 21%.
Characterizing Web Search in The Age of Generative AI (PDF) explicitly finds that AI search tools draw from a much wider pool of sources and are more likely to cite sites with fewer visits than traditional search engines.
This shows us that fragmentation is compounding. The pool of potential sources is wider, with little overlap among AI tools or between AI and traditional search.
Most problematic of all, though, is that the sourcing logic of any one tool can and often does change over time. This leads to different domains getting cited for the same prompts at different points in time — a phenomenon called citation drift.
Citation drift is more frequent than we might assume. Over the course of just a month, for instance, AI tools change approximately 40-60% of the domains they cite for the same prompt, according to Profound.
In other words, one domain could appear several times in a single response, then disappear completely the following month. This flip-flopping gets even worse over longer periods. For example, Profoundβs study also showed that, from January to July, as many as 70% to 90% of the domains cited for the same prompt had changed.
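If you track AI citations for your own prompts, drift of this kind is straightforward to quantify: compare the set of domains cited for the same prompt at two points in time. The sketch below is one simple way to do it; the domain lists are invented, and this is not how Profound computes its figures.

```python
# Illustrative drift measurement: what share of last month's cited domains
# no longer appear this month for the same prompt? Domain lists are invented.

def citation_drift(before, after):
    """Fraction of previously cited domains that dropped out."""
    before, after = set(before), set(after)
    if not before:
        return 0.0
    dropped = before - after
    return len(dropped) / len(before)

january = ["siteA.com", "siteB.com", "siteC.com", "siteD.com", "siteE.com"]
february = ["siteA.com", "siteB.com", "siteF.com", "siteG.com", "siteH.com"]

print(f"{citation_drift(january, february):.0%} of cited domains changed")
```

Run monthly across a fixed prompt set, a metric like this makes the 40-60% churn figure tangible for your own niche rather than an industry average.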
Search is fragmented across tools and time. As cited domains change more frequently, users see more sources, making it even harder for you to push your brand to the front.
So, what can we do about it? How should we approach this increasing fragmentation of search?
While this might change as new tools and strategies emerge, the best answer we have so far is this: focus on broad, multi-channel distribution.
When you canβt reliably predict which sources will be used, the best strategy is to widen your footprint. This creates more potential entry points into AI systemsβ training and retrieval layers.
Distribution also matters for another reason. AI tools often prefer third-party sources over branded domains, according to an AirOps study.
This will require some serious shifts in how many SEOs approach their work. Here are a few you can implement right away.
Youβre unlikely to win fragmented AI search on your own. Optimizing for it now takes a much broader approach than before, pulling in digital PR, social media, community management, and other functions.
Those areas require skills many SEOs donβt have. Those who do still have only 24 hours in a day, so spreading that work across multiple disciplines isnβt realistic.
This only works with a team. You might hate that idea, especially because it means giving up full control of your projects and results. I get it, but thatβs the reality right now. Youβll have to let some things go, trust others to handle them, and divide responsibilities. In other words, youβll need to collaborate efficiently.
Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts
Even if you let experts handle certain tasks, youβll still need at least a surface-level understanding of other disciplines becoming central to search.
SEOs will still own at least parts of distribution, whether that means handling the high-level strategy or executing it directly on specific channels.
In either case, doing this well requires skills you may not have used much before. So nowβs the time to develop them.
That could mean learning more about digital PR, partnerships, thought leadership, syndication, community presence, or something else. With so many possibilities, it helps to start with the area you feel most comfortable with or most drawn to at the moment.
You also need to change how you think about SEO, and then translate that shift into actual workflows. Google is still a major traffic driver, and rankings still matter. But for a fragmented, AI-driven search, obsessing over rank wonβt cut it.
Instead of asking, “How do I get this content to rank?” you now need to ask, “How do I get this content into as many places as possible?”
Again, the goal is to create multiple entry points across AI systems, platforms, and audiences, increasing the chances of your content getting discovered, cited, and surfaced.
Thatβs why itβs important to start thinking more about overall presence across ecosystems rather than just positions in specific search engines.
If youβve successfully shifted your mindset from ranking to presence, itβs time to build a workflow that reflects that change.
I know firsthand how easy it is to forget about distribution, especially if it wasnβt part of your process before. To make it stick, you need to redesign your workflow to position distribution at the core.
A good place to start is by adding a launch phase, where content is distributed immediately upon publishing. After that, you could include a recurring phase every few months to ensure you regularly refresh and redistribute content.
Define reusable details upfront, like which channels youβll consistently target and who owns each one. That way, youβll minimize planning from scratch and make sure nothing falls through the cracks.
Dig deeper: Content marketing in an AI era: From SEO volume to brand fame
Finally, if you want some easy tactics to immediately add to your to-do list, consider these:
The shifts are large enough that youβll need to rethink how you do SEO. As search fragments, the work itself will have to evolve.
The approaches and workflows you relied on in the past wonβt translate cleanly into a landscape shaped by multiple AI tools, changing sourcing logic, and constantly shifting citations.
These processes will also become more complex because they require closer collaboration with other teams. Distribution now intersects with digital PR, social media, partnerships, and community management, making cross-team coordination more important than before.
Thereβs a long road ahead. The best way to keep your sanity is to start small: focus on manageable steps, take them one at a time, and build from there.
Nvidia’s GeForce On Community Update is about to start. At 8 am PT (3 pm GMT), Nvidia will start its newest “GeForce On Community Update” at GDC 2026. During this event, Nvidia plans to show gamers the latest RTX games and the newest RTX platform features in action. Remedy has confirmed that Control Resonant will […]
The post Watch Nvidiaβs GDC 2026 GeForce On Community Update here appeared first on OC3D.
MSI unleashes a wave of Frieren: Beyond Journey’s End PC hardware, including a custom graphics card. MSI has unveiled an official lineup of new “Frieren: Beyond Journey’s End” PC parts and accessories. These limited edition products celebrate the hit anime with custom artwork and iconography. These parts symbolise the relationships between Frieren and her companions, […]
The post MSI unveils Limited Edition Frieren: Beyond Journeyβs End PC hardware appeared first on OC3D.

MarketingSoda grades and repairs your HubSpot database so teams can trust their records. It connects via OAuth, scans contacts and companies, and assigns an AβF score based on completeness, accuracy, freshness, consistency, and uniqueness. From the dashboard, you can trigger enrichment, standardization, validation, and deduplication, review conflicts, and track freshness trends.
It also embeds data quality scores inside HubSpot contact sidebars, allowing reps to see grades, freshness, and confidence and launch fixes without leaving the CRM.
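MarketingSoda's actual scoring is proprietary, but as a rough illustration of how a composite A-F grade could be derived from the five dimensions it lists, consider the sketch below. The weights, thresholds, and sample record are all invented, not the product's algorithm.

```python
# Hypothetical sketch of a composite A-F data-quality grade. The bands,
# equal weighting, and sample contact are invented for illustration only.

GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def letter_grade(dimension_scores):
    """Average five 0-100 dimension scores into a letter grade."""
    composite = sum(dimension_scores.values()) / len(dimension_scores)
    for threshold, grade in GRADE_BANDS:
        if composite >= threshold:
            return grade
    return "F"

contact = {
    "completeness": 85,  # most fields populated
    "accuracy": 90,      # values validated, e.g. email deliverable
    "freshness": 60,     # last updated over a year ago
    "consistency": 75,   # formatting matches the company record
    "uniqueness": 95,    # no duplicate detected
}

print(letter_grade(contact))  # B: stale data drags down otherwise good fields
```

The useful property of a composite grade like this is that one weak dimension (here, freshness) is visible in the overall score without hiding which field needs fixing.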
Resident Evil Requiem is getting additional content soon, including an unspecified surprise coming in May that sounds a lot like the engaging Mercenaries mode featured in past entries in the series, as confirmed in a new video message by director Koshi Nakanishi. But that is not all: the rumored story expansion has also been confirmed to be in the works. In today's video message, Nakanishi-san commented on the current state of the game while thanking players for their support. "We released an update the other day to fix a variety of issues, and we will continue to address any other […]
Read full article at https://wccftech.com/resident-evil-requiem-story-expansion-content-may/

The first batch of gaming benchmarks to test Apple’s latest and greatest M5 Max is here, and what better way to kick things off than to fire up Cyberpunk 2077, a title that has been ported to the macOS platform. Even though the technology giant has switched to a superior Fusion Architecture for both the M5 Pro and M5 Max, the maximum GPU core count is limited to 40. While we should still witness a slight framerate boost, what’s disappointing to see is that our laptop RTX 4090 ends up being 63 percent faster. Newest Cyberpunk 2077 benchmark comparison also […]
Read full article at https://wccftech.com/laptop-rtx-4090-tested-against-m5-max-macbook-in-cyberpunk-2077-massive-fps-difference/

Fallout 3 Remastered is one of gaming's worst-kept secrets. While the game has yet to be announced, a new McFarlane toy listing spotted on the series subreddit leaves little doubt that the iconic third entry in the series is indeed coming at some point. The listing spotted by the Fallout subreddit, originally shared by Toy News International, features, among DC and Marvel toys, an "ELITE EDITION 7IN β FALLOUT 3 REMASTERED β #13 T-45B NUKA COLA." As noted on the subreddit, "while there's no confirmation from McFarlane themselves, this comes from a new set of toy listings via an online [β¦]
Read full article at https://wccftech.com/fallout-3-remastered-listing-worst-kept-secret/

The ASRock X870E Taichi OCF remains one of the best motherboards for enthusiasts looking to break world records. Overclocker "Alex2305" Scores 14,290 Marks in PCMark 10 Express Using Ryzen 9 9950X3D on ASRock X870E Taichi OCF. Another overclocker has set a new world record using ASRock's popular flagship AM5 motherboard. The user "Alex2305" just broke all previous records for the highest scores in PCMark 10 Express, a popular benchmarking tool for CPUs. Alex2305 used AMD's flagship Ryzen 9 9950X3D CPU on the ASRock X870E Taichi OCF, which we have declared as one of the best AM5 motherboards for overclocking. […]
Read full article at https://wccftech.com/new-world-record-in-pcmark-10-express-with-amd-ryzen-9-9950x3d-on-asrock-x870e-taichi-ocf/

If youβve been in marketing long enough, youβve probably lived through a few identity crises. First, we were channel experts. Then, we became integrated marketers, growth marketers, and performance marketers. Somewhere along the way, someone added βAIβ to everyoneβs job description and called it a day.
Now, weβre entering the era of the full-stack marketer. From where I sit β particularly as a media leader β the role is starting to look a lot like product management.
This doesnβt mean you need to start writing Jira tickets for fun (though some of you already do). It means that tomorrowβs most effective media leaders wonβt just optimize campaigns. Theyβll own outcomes, connect dots across teams, and think holistically about the entire user experience, from first impression to final conversion (and beyond).
Iβve seen this shift most clearly in industries with long consideration cycles, multiple stakeholders, and rising acquisition costs β where marketing performance is inseparable from the experience itself.
Letβs break down whatβs driving the rise of the full-stack marketer, what it really means to βthink like a product manager,β and why this mindset is becoming non-negotiable for media leaders.
A full-stack marketer isnβt someone who does everything (burnout isnβt a job requirement). Instead, itβs someone who understands how everything works together.
Over the course of my career, Iβve learned that the most impactful media decisions rarely come from being the deepest expert in one area. They come from having working fluency across many:
The full-stack marketer doesnβt need to be the deepest expert in every area, but they do need to know enough to connect insights, spot gaps, and make informed trade-offs. In practice, this means constantly zooming out to see the system and zooming back in when something breaks.
Earlier in my career, media leadership was often defined by questions like:
Those questions still matter. I ask them all the time. But over the years, Iβve learned theyβre no longer sufficient on their own. Todayβs environment forces media leaders to grapple with bigger, messier questions:
These are product questions. Product managers obsess over the end-to-end experience: the user journey, friction points, trade-offs, and outcomes. Media leaders who adopt this mindset stop seeing campaigns as isolated efforts and start seeing them as inputs into a broader system.
In many of the industries Iβve worked in, that system is anything but simple.
Dig deeper: Why PPC teams are becoming data teams
Marketing performance rarely exists in isolation. In many industries (especially those with longer decision cycles), a click is just the beginning, not the win.
Whether youβre selling financial services, healthcare, or education, prospects move through nonlinear journeys influenced by multiple touchpoints, stakeholders, and moments of friction. This is where full-stack thinking becomes critical.
Iβve lost count of how many times Iβve heard this reaction when performance starts slipping: βThe platform is getting more expensive.β
Sometimes thatβs true. But a product-minded media leader asks deeper questions:
Across industries, Iβve repeatedly seen strong intent at the keyword or audience level, healthy CTRs, and solid landing-page engagement followed by a steep drop-off at the point of conversion. Itβs a product experience problem.
In higher ed, this often shows up when high-intent program traffic is routed to lengthy or confusing application flows, generic inquiry forms, or experiences that donβt match the promise of the ad, especially on mobile. Prospective students signal strong intent, only to hit friction that has nothing to do with media and everything to do with the experience theyβre asked to navigate.
A full-stack marketer doesnβt just flag this: they bring data, partner cross-functionally, and help prioritize fixes based on impact.
One of the most important product principles is that not all users are the same, and they shouldnβt be treated that way.
Many organizations market to multiple audiences at once, each with different motivations, risk tolerance, and timelines. Treating them as if theyβre buying the same βthingβ is a fast track to average results.
A product-minded media leader understands that:
Iβve seen this clearly in healthcare, where patients, caregivers, and referring providers evaluate the same organization through entirely different lenses. Financial services presents a similar challenge, with banking, investment, and insurance decisions varying dramatically by life stage and goals.
Full-stack marketers adapt media strategy accordingly, from channel mix to messaging to measurement. This is because they understand product-market fit, not just audience targeting.
One of the biggest blind spots in media strategy is what happens after someone converts. Product thinkers ask:
Iβve seen performance improve without changing media at all, simply by improving speed-to-lead or aligning follow-up messaging with campaign intent.
Healthcare offers especially clear examples of this dynamic due to intake workflows, appointment scheduling, and care coordination, but the principle is universal: media doesnβt end at the form fill. The full-stack marketer is accountable for conversions and outcomes.
Dig deeper: What AI means for paid media, user behavior, and brand visibility
Another hallmark of product management is roadmap thinking: prioritizing initiatives based on impact, effort, and sequencing. Full-stack media leaders bring this same approach to marketing:
In practice, this might look like:
Instead of chasing the βnext shiny channel,β full-stack marketers focus on compounding gains.
Product managers donβt just look at metrics. They interrogate them. The same should be true for media leaders. Instead of asking, βWhatβs the CPA?β Iβve learned to ask:
In higher ed, this might mean:
Data becomes a tool for decision-making.
Full-stack marketers are inherently collaborative because they have to be. In higher ed, success often requires alignment across:
Media leaders who think like product managers donβt just execute requests. They help stakeholders understand trade-offs, prioritize initiatives, and rally around shared goals. They also translate data into stories people can act on.
Dig deeper: Break down data silos: How integrated analytics reveals marketing impact
The rise of the full-stack marketer doesnβt mean specialization is dead. It means seeing the entire system matters more than optimizing any single piece of it.
From my perspective, tomorrowβs strongest media leaders will:
In categories where trust, timing, and transformation are at the core of the βproduct,β this mindset is no longer optional.
At its heart, marketing here is more than campaigns. Itβs guiding life-changing choices. If youβre a media leader feeling like your role is expanding faster than your job description β congratulations! Youβre not losing focus. Youβre evolving.

Buying AI capabilities to drive marketing is easy. Enabling marketing teams to actually use it independently, decisively, and at scale is far harder.
The main culprit? Humans.
Marketing teams have always had the same elusive goal: to move at the pace of the consumer. Responding to each customerβs needs in real time, delivering the relevant message at the right moment, and optimizing customer lifetime value to drive loyalty and ROI. The goal is not new.
What is perpetually new are the AI technologies available to analyze consumer data and generate instant, personalized messaging at scale. But while technology evolves rapidly, the ability of marketing teams to harness it independently and decisively has not kept pace. The main obstacle is organizational: most marketing teams have not structured themselves to extract full value from the technology they already have.
This is not to say that there is no progress. There is. Marketing teams that have crossed that chasm are seeing extraordinary results.
One case in point is Caesars Entertainment, which reduced campaign execution time from five days to five minutes. Asadul Shah, vice president of player revenue strategy, called it “a massive game changer.”
Before that transformation, Caesars marketers manually built targeting lists across disconnected systems, coordinated across multiple tools and waited on engineers, analysts and creative teams before anything could go out. The result was an operation too slow to target players with the precision and timing the market demanded.
Caesars worked with Optimove to consolidate data, orchestration and execution in one platform. Shah noted the transformation made marketing βnot just more efficient; it is more responsive to what our players actually need in the moment.β
What made it work was not technology alone. Caesars implemented Positionless Marketing, a framework that frees marketing teams from fixed roles, giving every marketer the power to execute any task instantly and independently. Optimove provided the platform. Caesars built the team structure to make it real. Technology and human ingenuity, working together, made Positionless Marketing possible.
Any organization achieving this kind of transformation is doing what McKinsey calls “organizing to value,” a fundamental rethink of structure, decision-making and accountability that turns a marketing team into an operation built to drive value continuously. For marketing, that means becoming a Positionless team that optimizes customer lifetime value, drives loyalty and delivers measurable ROI. Below, we use McKinsey’s Organize to Value framework to outline the pitfalls that block Positionless Marketing and the blueprint to build teams that can execute any marketing task, instantly and independently.
McKinsey has identified six core problems preventing marketing teams from successfully evolving into the Positionless model. Of these, only one is about technology. All the others are about how leaders and teams are getting in their own way.
These are the realities of assembly-line marketing operations — not Positionless ones. Insights live with analysts. Creativity lives with designers. Activation lives with engineers. Value disappears in the spaces between them. The assembly line was built for control. It was never built to deliver value.
Assembly-line marketing is counter to what Peter Drucker, the father of modern management, said: βThe purpose of business is to create and keep a customer.β
McKinseyβs βOrganize to Valueβ blueprint proposes a fundamental shift: design organizations around value creation, clear outcomes, impact over job titles and minimal friction execution. It provides the foundation to become Positionless and build the conditions for marketing teams to keep customers for life.
To make Positionless Marketing a reality, marketing leaders should focus on pragmatic application and the aspects that most influence marketing execution.
These changes require sustained commitment. But the alternative (an assembly-line structure that was never built to deliver customer value) is far costlier than the transformation itself.
The results speak for themselves. In addition to Caesars:
The technology and AI tools are here and ever evolving. Today, AI generates infinite creative variants. Data platforms surface real-time behavioral signals. Decisioning engines coordinate across channels instantly.
But technology layered on top of an assembly-line structure creates the illusion of progress. The same handoffs happen. The same approvals add the same delays. Speed arrives at the edge; the bottleneck stays in the middle.
External pressures are accelerating. Customers expect personalization and the best experience across all channels. Competition is rising and growing more complex.
Marketing leaders who wait for transformation will find their competitors have already made it. The ones moving first are pulling ahead.
McKinsey confirms what the best marketing teams already know: the right structure and technology unleash human potential β and vice versa. Smart people trapped in the wrong system will still underperform. The best AI tools in the world wonβt deliver results when constrained by the wrong organization.
McKinsey’s blueprint points the way. Positionless Marketing is the destination.



Remedy plans to showcase Control Resonant at GeForce On. Remedy has confirmed that it plans to showcase Control Resonant at Nvidia’s GDC 2026 “GeForce On Community Update”. Control Resonant is Remedy’s sequel to Control, which revisits the series with Dylan Faden as the game’s protagonist. Dylan Faden is the brother of Control’s protagonist, Jesse Faden. […]
The post Control Resonant will be at Nvidia GeForce On appeared first on OC3D.

If Windows 11 feels cluttered with AI features, ads, and background services, these popular free tools help strip out the bloat, disable telemetry, and give you more control over the OS.
Hitoo is a live voice-translation video calling platform that lets you speak any language in real time. It preserves your voice identity with sub-300ms latency, supports over 50 languages, and delivers HD video with end-to-end encryption. Use multilingual chat with automatic translation, host group calls for up to 50 participants, and connect without plugins. Businesses use it to remove language barriers and keep conversations natural and secure.
PC components keep getting more expensive, directly affecting laptop costs, and could drive a nearly 40% price hike.
TrendForce Forecasts 40% Price Hike for Mainstream Notebooks as Memory and CPU Availability and Pricing Worsen
While desktop memory prices seem to have somewhat stabilized in some parts of the world, laptop memory costs have surged rapidly. Coupled with the increasing prices of SSDs, this has driven a significant rise in laptop prices, and it looks like it's about to get worse. According to a new report, mainstream notebook prices could rise by nearly 40% […]
Read full article at https://wccftech.com/laptop-prices-expected-to-rise-by-40-as-memory-and-cpu-prices-continue-to-soar/

Barely two weeks after the New York State Attorney General announced a lawsuit against Valve over the gambling-like loot boxes in games such as Counter-Strike 2, the Steam owner is about to be hit by another lawsuit, this time on the other side of the "pond" (where it already faces a £656 million class-action lawsuit over the 30% fee it collects on each game and piece of software distributed on the platform). The Performing Rights Society (PRS), a UK-based music licensing and royalty collection organization that works on behalf of songwriters, composers, and music publishers whenever their music […]
Read full article at https://wccftech.com/valve-hit-another-lawsuit-performing-rights-society/

By all accounts, Nioh 3 is a great success for Team Ninja and Koei Tecmo. It was lavished with praise by critics (such as our own Francesco De Meo, who rated it 9.8/10 and called it the best game the Japanese studio has ever made) and fans alike, as confirmed by the fact that it's the fastest-selling game in the trilogy so far. That said, the development team is aware there's further room for improvement. Fumihiko Yasuda, producer of Nioh 3 and studio head of Team Ninja, confirmed as much in a candid interview with GamesRadar: Nioh 3 is a […]
Read full article at https://wccftech.com/nioh-3-great-but-not-perfect-lots-we-can-improve-upon-says-team-ninja/

Nvidia RTX 5050 9GB spotted in shipping manifest – Specifications Confirmed
The renowned hardware leaker kopite7kimi has unveiled the specifications of Nvidia's RTX 5050 9GB, an upgraded version of the company's RTX 5050 8GB. This GPU uses 3GB GDDR7 chips instead of 2GB GDDR6 chips, which offers more memory bandwidth and capacity per chip. This […]
The post RTX 5050 9GB GDDR7 GPU Specifications leak appeared first on OC3D.
DeeeVee helps parents turn bedtime into a meaningful daily ritual. The app generates short bedtime stories narrated in a parent's or grandparent's voice, making stories feel personal while helping children learn values like courage, kindness, honesty, and empathy.
Each story includes reflection questions that spark parent-child conversations and supports 77 languages so multilingual families can preserve their native language. DeeeVee helps parents make bedtime calmer, more engaging, and meaningful in just a few minutes each night.
Independent Polish game developer Bloober Team has recently updated investors on the studio's upcoming lineup, including the freshly announced Layers of Fear 3. The game was teased in mid-February during the franchise's tenth anniversary livestream. Now, though, we have learned that the game won't be developed by one of Bloober Team's internal teams; the studio will instead work alongside a partner (although it will still provide significant input, being the franchise holder and creator). The partner studio was not named explicitly, but odds are it might be fellow Polish studio Anshar, which already worked on Layers of Fear, the "collection" […]
Read full article at https://wccftech.com/layers-of-fear-3-not-develoepd-bloober-cronos-team-new-ip/

This morning, KONAMI Digital Entertainment announced that Metal Gear Solid Delta: Snake Eater has surpassed two million units sold as of February 17, 2026, as shown in the celebratory screenshot. The game became a million seller on day one when it launched on Thursday, August 28, largely on the back of pre-orders. The game then took 173 days to reach the second million, though, which isn't great. Still, the fact that KONAMI bothered to send out a press release about it means they must be reasonably happy with the figure. It all depends on the budget of the game; we […]
Read full article at https://wccftech.com/metal-gear-solid-delta-snake-eater-2-million-units/
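The 173-day figure checks out against the dates given in the piece; a quick sanity check in Python, using the launch date and milestone date from the article:

```python
from datetime import date

launch = date(2025, 8, 28)     # day-one million seller
milestone = date(2026, 2, 17)  # two million units confirmed

days_between = (milestone - launch).days
print(days_between)  # 173 days from launch to the second million
```

That works out to an average of roughly 5,800 units per day over the second million, versus a million-plus on day one.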



FUSE is the operating system for growth decisions. With $790B in digital ad spend yearly, allocation decisions remain fragmented and rely on manual interpretation. FUSE verifies live performance data and decides how ad budgets should move before spending.
Instantly connect all your live data sources like GA4, Google Ads, Meta Ads, TikTok, and LinkedIn, and watch FUSE run cross-channel calculations to optimize your marketing budgets without errors.
NVIDIA's GeForce RTX 5050 9 GB graphics card specs have been revealed, showing the same core specs with an updated memory config.
NVIDIA Goes With Same Core Specs But Updated Memory Specs On Its Upcoming GeForce RTX 5050 9 GB Graphics Card
So last week, we reported that NVIDIA was preparing a new variant of the GeForce RTX 5050 graphics card with 9 GB of memory. The rumored variant was later reported to launch around Computex 2026, aimed at a similar entry-level price point. Now, kopite7kimi has revealed the exact GPU and memory configuration used by the NVIDIA […]
Read full article at https://wccftech.com/nvidia-geforce-rtx-5050-9-gb-specs-leak-gb206-gpu-2560-cores-gddr7-130w/
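For context on what the switch to 3GB chips implies, here is a back-of-the-envelope sketch. The chip count, the 32-bit interface per GDDR7 device, and the 28 Gbps per-pin data rate are illustrative assumptions, not figures from the leak:

```python
# Rough memory math for the rumored RTX 5050 9 GB config.
# Assumptions (not from the leak): three 3GB GDDR7 devices,
# 32 bits per device, 28 Gbps per-pin data rate.
chips = 3
capacity_per_chip_gb = 3
bits_per_chip = 32
data_rate_gbps = 28  # GDDR7 speed bins vary by SKU

capacity_gb = chips * capacity_per_chip_gb            # total VRAM
bus_width_bits = chips * bits_per_chip                # aggregate bus
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps   # GB/s

print(capacity_gb, bus_width_bits, bandwidth_gbs)  # 9 96 336.0
```

Under these assumptions the denser chips trade a narrower bus for a faster per-pin rate, which is why total bandwidth can still rise despite fewer devices.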

SK hynix has announced the successful development of its LPDDR6 memory utilizing the 1c process node, offering up to 10.7 Gbps speeds.
SK hynix LPDDR6 Memory Delivers 33% Faster Speeds While Saving More Than 20% Power
Press Release: SK hynix announced that it has successfully developed a 16Gb LPDDR6 DRAM based on the sixth-generation 10nm-class (1c) process technology. After unveiling the product at CES last January, the company recently completed the world's first validation of 1c LPDDR6 development. SK hynix plans to complete preparations for mass production within the first half of the year and begin supplying the product in the […]
Read full article at https://wccftech.com/sk-hynix-develops-lpddr6-memory-1c-node-16gb-density-10-7-gbps/

Play Apple Music instantly by tapping NFC tags
Premium sponsors, ready when you are
Finally read the books you buy
Generate PRDs, specs and wireframes your AI understands.
Real Expressive AI Voices
Local server mocks in macOS, all in a lightweight native app
Dual-brain edge AI computer by Qualcomm and Arduino
Find skills for Claude Code, Cursor, Copilot & more
Build web/mobile apps, sites and extensions by talking to AI
Give your AI agent design taste + prevent generic AI design
The open source bug reporting and feedback tool
Manage a team of AI agents that do real work
Type a prompt to generate any editable social media graphic
Nobody tells you what you can ask AI to build
Real developers help vibecoders with AI-built apps
Sequences, infrastructure, deliverability. One product.
AI-first platform for building commerce stores, fast
AI presentations without the AI slop
Multi-agent review catching bugs early in AI-generated code
Marketing suite for the agentic web
Translate text in your videos without recreating visuals
The AI that fixes prod autonomously
Find product gaps & build from bad reviews
Quit all running Mac apps in one click from your menu bar