Google AI Overviews: Gemini 3, follow-up questions, AI Plus, and Google Photos photo-to-video
Introduction

TL;DR: Google AI Overviews is moving to a more conversational search flow by adopting Gemini 3 and enabling follow-up questions that bridge into AI Mode. Google AI Plus launches at $7.99/month in the U.S. and expands to 35 new countries/territories, bundling Gemini 3 Pro, Flow, NotebookLM, 200GB storage, and family sharing. Google Photos introduces prompt-based photo-to-video transformation, but it’s gated by age, account type, backups, and daily limits.

In this post, Google AI Overviews is the anchor: what changed, how the user journey shifts, and what individuals and organizations should do next. ...
Grok image generation: why digital undressing and CSAM risks keep resurfacing
Introduction

TL;DR: Grok image generation became a high-profile example of how “digital undressing” (nudification) and CSAM-adjacent risks can scale fast when person-image editing, virality defaults, and monetization intersect.

Context: Regulators (EU DSA) and national authorities are now treating this as a systemic risk-management problem, not just “bad content.”

Definitions and scope

One-sentence definition: Digital undressing is the misuse of generative image tools to create nonconsensual sexualized imagery of identifiable people (nudification). ...
NVIDIA Earth-2: Open Models for 15-Day Forecasts and Severe Storm Nowcasting
Introduction

TL;DR: NVIDIA Earth-2 is an open “weather AI stack” spanning data assimilation (HealDA), medium-range forecasting (Atlas), and nowcasting (StormScope), supported by Earth2Studio and PhysicsNeMo. NVIDIA Earth-2 appears designed to lower time-to-PoC for meteorology services and decision-heavy industries (insurance, energy) by shipping models and workflow tooling together.

Why it matters: Weather AI adoption fails less on model quality and more on reproducibility, licensing, validation, and operational controls.

What NVIDIA released in the Earth-2 family

Three model lines: Atlas, StormScope, HealDA

Earth-2 Medium Range (Atlas) targets global 15-day forecasts and 70+ variables.
Earth-2 Nowcasting (StormScope) targets kilometer-scale severe weather prediction over 0–6 hours.
Earth-2 Global Data Assimilation (HealDA) is positioned to generate initial conditions for forecasting workflows.

Open tooling: Earth2Studio and PhysicsNeMo

Earth2Studio is a Python package for building inference pipelines; docs warn that base installs may not cover all optional capabilities.
PhysicsNeMo is positioned as the open framework for training/fine-tuning.

Why it matters: Shipping the stack (not just a model) is what enables real integration into risk and operations pipelines. ...
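The three-stage flow described above (assimilation → medium-range forecast → nowcast) can be sketched as a minimal pipeline. This is a conceptual illustration only: the `DataAssimilation`, `MediumRangeModel`, and `NowcastModel` classes are hypothetical stand-ins for the HealDA/Atlas/StormScope roles, not the actual Earth2Studio API.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class State:
    """Gridded atmospheric state at one lead time (fields are placeholders)."""
    lead_hours: int
    fields: Dict[str, float]

class DataAssimilation:
    """HealDA-style role: turn observations into initial conditions."""
    def initial_conditions(self, observations: Dict[str, float]) -> State:
        return State(lead_hours=0, fields=dict(observations))

class MediumRangeModel:
    """Atlas-style role: roll the state forward in 6-hour steps."""
    def forecast(self, state: State, days: int) -> List[State]:
        return [State(lead_hours=h, fields=state.fields)
                for h in range(6, days * 24 + 1, 6)]

class NowcastModel:
    """StormScope-style role: short-range prediction over 0-6 hours."""
    def nowcast(self, state: State) -> List[State]:
        return [State(lead_hours=h, fields=state.fields) for h in range(1, 7)]

def run_pipeline(observations: Dict[str, float], days: int = 15):
    # Assimilation produces the shared initial conditions for both models.
    ic = DataAssimilation().initial_conditions(observations)
    medium = MediumRangeModel().forecast(ic, days=days)
    short = NowcastModel().nowcast(ic)
    return medium, short

medium, short = run_pipeline({"t2m": 288.0, "u10": 3.2}, days=15)
print(len(medium), medium[-1].lead_hours)  # 60 steps out to a 360-hour horizon
print(len(short), short[-1].lead_hours)    # 6 steps out to a 6-hour horizon
```

The point of the sketch is the wiring, not the models: because assimilation output feeds both forecast horizons, swapping one component does not disturb the others, which is the integration property the "stack, not just a model" framing implies.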
Gemini Personal Intelligence: What changes when Gmail and Photos power your AI
Introduction

TL;DR: Gemini Personal Intelligence lets Gemini use context from connected Google apps (notably Gmail and Photos) to generate more tailored answers. It’s opt-in and rolling out first to U.S. Google AI Pro/Ultra users (as of 2026-01-26). The trade-off is real: better personalization, but lingering detail errors and “over-personalization.”

Why it matters: Personalization at LLM scale isn’t a UI tweak. It changes the data path: what’s accessed, summarized, reviewed, and potentially used for model improvement. ...
Gemini-powered Siri: What’s confirmed, what’s reported, and how to prepare
Introduction

TL;DR: Gemini-powered Siri is officially tied to an Apple–Google multi-year collaboration on next-gen Apple Foundation Models. The “February unveil” is reported, not confirmed. Prepare via Private Cloud Compute (PCC) governance and App Intents readiness.

Context: In their joint statement, Apple and Google say future Apple Intelligence features, including a more personalized Siri, will be powered by models based on Google’s Gemini and cloud technology.

Why it matters: Treat dates as non-authoritative until Apple confirms. Treat platform mechanics (data flow, app actions, policy controls) as authoritative now. ...
Meta AI characters: Why Meta paused teen chat and what product teams should do
Introduction

TL;DR: On 2026-01-23, Meta said it will temporarily pause teens’ access to Meta AI characters and build a new version with stronger parental controls. The restriction starts “in the coming weeks” and remains until the updated experience is ready.

Context: This is not just moderation; it’s a shift toward age-segmented AI products (teens vs. adults), aligned with policy and risk frameworks.

Why it matters: If your AI product supports open-ended conversation, you should assume a “teen mode” requires separate UX, controls, logging, and governance. ...
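An age-segmented feature gate of the kind described above can be sketched in a few lines. This is an illustrative sketch only; the feature names, age threshold, and `teen_mode_ready` flag are hypothetical, not Meta's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical policy tables: which features are paused for teens and which
# remain available regardless of age band.
TEEN_PAUSED_FEATURES = {"ai_characters"}          # open-ended persona chat
ALWAYS_ALLOWED = {"homework_help", "image_captioning"}

@dataclass
class User:
    age: int
    parental_controls_enabled: bool = False

def feature_allowed(user: User, feature: str, teen_mode_ready: bool = False) -> bool:
    """Age-segmented gate: teens lose paused features until the reworked
    teen experience (with parental controls) ships and is enabled."""
    if user.age >= 18:
        return True
    if feature in TEEN_PAUSED_FEATURES:
        # Temporary pause: restored only once the updated experience exists
        # and the parent has opted in.
        return teen_mode_ready and user.parental_controls_enabled
    return feature in ALWAYS_ALLOWED

print(feature_allowed(User(age=15), "ai_characters"))  # False: paused for teens
print(feature_allowed(User(age=30), "ai_characters"))  # True: adult band
```

The design point is that the teen path is a separate branch with its own flags and audit surface, not a filter bolted onto the adult product, which is what "separate UX, controls, logging, and governance" implies in practice.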
Google Discover Makes AI-Generated Headlines Permanent: Trust and Publisher Impact
Introduction

TL;DR: Google confirmed AI-generated “trending topics” headlines in Discover are a permanent feature, not an experiment. The company says these are topic overviews across multiple sources, not rewrites of a single publisher’s headline, but critics say the UI and frequent inaccuracies erode trust.

Why it matters: When the “headline layer” shifts from publishers to platforms, accountability and brand trust become harder to preserve at scale.

What Changed: From “Experiment” to “Feature”

In December 2025, Google described AI headline replacement as a limited UI experiment for some Discover users. By late January 2026, Google told The Verge this is a “feature” that “performs well for user satisfaction.” ...
Hallucinated citations in NeurIPS papers: what broke and how to fix it
Introduction

TL;DR: GPTZero reported 100 confirmed fake references across 51 accepted NeurIPS 2025 papers, and the incident spotlights how AI-generated “reference slop” can slip through elite peer review. Fundamentally, hallucinated citations are not “minor typos”: they break the verifiability chain that science relies on.

Why it matters: Citations are the audit trail. If the trail is fabricated, readers can’t reproduce or validate claims.

What was found

GPTZero said it scanned the full set of accepted NeurIPS papers and confirmed 100 hallucinated citations across 51 papers. Some reports mention a slightly different paper count (e.g., “at least 53”), which typically reflects differences in counting criteria or update timing. ...
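The verifiability chain described above suggests a simple triage step: cross-check each reference title against a trusted index before review. The sketch below is a minimal illustration; the in-memory `VERIFIED_TITLES` set stands in for a real lookup service (e.g., a Crossref or Semantic Scholar query), and the function only flags candidates for manual checking, it does not prove fabrication.

```python
import re

# Hypothetical stand-in for a verified bibliographic index; in practice this
# would be an API lookup, not a hard-coded set.
VERIFIED_TITLES = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so title matching is tolerant."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def flag_suspect_references(references):
    """Return references whose titles match nothing in the verified index.
    These need human verification, not automatic rejection: a miss may be a
    formatting quirk rather than a hallucination."""
    return [r for r in references if normalize(r) not in VERIFIED_TITLES]

refs = [
    "Attention Is All You Need",
    "Adaptive Quantum Dropout for Robust LLM Alignment",  # fabricated-looking
]
print(flag_suspect_references(refs))
```

Even this crude filter restores part of the audit trail: a reviewer sees a short list of unverifiable entries instead of trusting a bibliography wholesale.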
ChatGPT Ads: What OpenAI Promised, Why Markey Objected, and What It Means for Trust
Introduction

TL;DR: OpenAI announced on 2026-01-16 that it will test ads in the U.S. for logged-in adults on ChatGPT Free and Go, placing clearly labeled sponsored units below answers and claiming ads won’t affect responses or sell user data. DeepMind CEO Demis Hassabis said he was surprised OpenAI moved so fast, warning that ads inside “assistant” experiences can undermine trust; he said Gemini has no current plans for ads. Sen. Ed Markey demanded answers by 2026-02-12, flagging deceptive “blurred” advertising, youth safety risks, and the use of sensitive conversations for ad targeting.

OpenAI’s ChatGPT ads debate isn’t just about monetization; it’s about whether conversational systems can keep trust boundaries intact when commercial incentives arrive. ...
How to Design AI Governance That Actually Works in Organizations
Introduction

TL;DR: AI governance fails when treated as paperwork. It succeeds when embedded into decision-making structures. Organization design matters more than tools.

Governance Is About Control Points

Effective governance defines where humans intervene. Uncontrolled AI creates unmanaged risk.

Accountability Must Stay Human

AI must fit into existing responsibility structures. Responsibility cannot be automated.

Conclusion

AI governance is an organizational issue. Design beats documentation. Sustainability requires structure.

Summary

Governance must be embedded, not declared. Responsibility and control define success. AI longevity depends on governance.

Recommended Hashtags: #AIGovernance #EnterpriseAI #OrganizationDesign #RiskManagement #AIOperations ...
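The "control point" idea above (governance defines where humans intervene) can be made concrete with a small sketch. The names, risk tiers, and approval flag here are hypothetical, chosen only to illustrate a human-in-the-loop gate.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

def execute(decision: str, risk: Risk, human_approved: bool = False) -> str:
    """A control point: high-risk AI decisions do not take effect without
    explicit human approval, so accountability stays with the approver."""
    if risk is Risk.HIGH and not human_approved:
        return f"BLOCKED: '{decision}' awaits human review"
    return f"EXECUTED: '{decision}'"

print(execute("auto-reply to customer", Risk.LOW))
print(execute("approve loan", Risk.HIGH))                       # blocked
print(execute("approve loan", Risk.HIGH, human_approved=True))  # approved path
```

The gate is structural, not procedural: it lives in the decision path itself rather than in a policy document, which is what "embedded, not declared" means in code terms.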