Welcome to Royfactory

Latest articles on Development, AI, Kubernetes, and Backend Technologies.

AI Sales Forecasting: Data Modeling Template for Demand Forecasting (Part 2)

TL;DR: AI Sales Forecasting often fails due to data semantics (schemas, time meaning, leakage), not model choice. Model your sources as sales + calendar + price + promo + inventory/stockouts, then build a stable training/inference view. Enforce point-in-time correctness for time-series feature joins to prevent leakage. Treat stockouts as censored demand and track them explicitly. In this Part 2, you’ll get a practical data model and validation rules you can lift into a warehouse/lakehouse. ...
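To make the point-in-time correctness idea concrete, here is a minimal sketch (not taken from the article) of a leakage-safe feature join with pandas; the table and column names (sales, price_history, sku, ts, valid_from) are illustrative assumptions.

```python
# Minimal sketch: point-in-time-correct price join for a sales training view.
# Only price rows with valid_from <= sale timestamp may be joined; anything
# later would leak future information into the features.
import pandas as pd

sales = pd.DataFrame({
    "sku": ["A", "A", "B"],
    "ts": pd.to_datetime(["2026-01-05", "2026-01-12", "2026-01-12"]),
    "units": [10, 7, 3],
})

price_history = pd.DataFrame({
    "sku": ["A", "A", "B"],
    "valid_from": pd.to_datetime(["2026-01-01", "2026-01-10", "2026-01-01"]),
    "price": [9.99, 8.99, 4.50],
})

# merge_asof requires both frames sorted on their time keys.
sales = sales.sort_values("ts")
price_history = price_history.sort_values("valid_from")

# For each sale, pick the latest price known at or before the sale time.
train_view = pd.merge_asof(
    sales, price_history,
    left_on="ts", right_on="valid_from",
    by="sku", direction="backward",
)
print(train_view[["sku", "ts", "units", "price"]])
```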

February 9, 2026 · 3 min · 581 words · Roy

AI Sales Forecasting: Designing an AI-based Demand Forecasting System (Part 1)

TL;DR: AI Sales Forecasting succeeds when forecasts are tied to decisions (inventory, ordering, staffing), not when a model merely outputs numbers. Use an end-to-end flow: requirements → data contract → baselines + backtesting → model strategy → probabilistic forecasts → deployment + monitoring. Prefer probabilistic forecasting (quantiles/intervals) when under- and over-forecasting costs are asymmetric. In this series, AI Sales Forecasting is treated as a production system: dataset design, evaluation, deployment mode, and operational guardrails come first. ...
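To make the probabilistic-forecasting point concrete, here is a minimal sketch (not from the post) of the pinball (quantile) loss, which scores a quantile forecast and penalizes under- and over-forecasting asymmetrically; the arrays are made-up illustrative values.

```python
# Minimal sketch: pinball (quantile) loss for evaluating quantile forecasts.
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average quantile loss at quantile q (0 < q < 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([100.0, 120.0, 90.0])
p50 = np.array([95.0, 118.0, 92.0])    # median forecast
p90 = np.array([130.0, 150.0, 120.0])  # high quantile, e.g. for safety stock

# At q=0.9, under-forecasting is penalized 9x more than over-forecasting,
# which fits settings where stockouts cost more than excess inventory.
print(pinball_loss(y_true, p50, 0.5), pinball_loss(y_true, p90, 0.9))
```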

February 8, 2026 · 4 min · 687 words · Roy

Open LLM Leaderboard trends: reading Hugging Face v2 without fooling yourself

TL;DR: Open LLM Leaderboard v2 shifts evaluation toward instruction-following, hard reasoning, long-context multi-step reasoning, and difficult science QA. In the public v2 “contents” view, the Average ranges from 0.74 to ~52.1, and GPQA / MuSR are clear bottlenecks (their maxima are much lower than other tasks). Top entries often include merged/community-tuned models, so you should separate “leaderboard performance” from “production-ready choice.” Why it matters: If you treat a leaderboard rank as a production verdict, you’ll pick the wrong model. ...

February 8, 2026 · 3 min · 608 words · Roy

2026 Big Tech AI infrastructure spending $650B: what the capex numbers really mean

TL;DR: Media summaries put 2026 Big Tech AI infrastructure spending at roughly $650B, while Reuters frames it as more than $630B. (Bloomberg.com) Amazon guided about $200B (company-wide capex), Alphabet guided $175B–$185B, and Meta guided $115B–$135B including finance lease principal payments. (Amazon) The “total” varies mostly because definitions (leases vs cash PP&E) and periods (calendar vs fiscal year) don’t line up perfectly across companies. (Microsoft) Context: the $650B figure for 2026 Big Tech AI infrastructure spending is shorthand for a hyperscaler capex super-cycle aimed at AI data centers, accelerated computing, and networking. Reuters describes the same theme as over $630B combined. (Bloomberg.com) ...

February 7, 2026 · 4 min · 678 words · Roy

Alphabet 2026 CapEx: What a near-doubling means for AI infrastructure, cost, and ops

TL;DR: Alphabet guided 2026 CapEx to $175B–$185B, vs. $91.447B in 2025 property & equipment purchases (roughly 1.9–2.0x). The company ties the ramp to meeting customer demand and expanding AI infrastructure, alongside strong FY2025 results. For practitioners, the takeaway is not “AI hype,” but the concrete need to harden capacity planning, FinOps controls, security, and observability. Why it matters: CapEx guidance is an operational signal: it shapes real-world capacity, constraints, and budget realities for AI workloads. ...

February 5, 2026 · 3 min · 472 words · Roy

Nscale IPO: What an Nvidia-backed Neocloud Preparing to List Really Means

TL;DR: The Nscale IPO is not a confirmed listing; it means the company has hired Goldman Sachs and JPMorgan to prepare an IPO, with no timeline set. This post defines the “neocloud” model and turns the confirmed facts and IPO headlines into a practical risk checklist. 1) Definition: a Neocloud is a cloud provider specialized in GPU-centric AI training/inference rather than a general-purpose hyperscaler. Scope: it sells GPU capacity plus data-center operations as AI compute infrastructure; it is not an “AI model company” whose core value is model IP. Common misconception: “Nvidia-backed” ≠ “Nvidia subsidiary”; it usually means investment, partnership, or ecosystem ties, not control. ...

February 5, 2026 · 3 min · 522 words · Roy

SpaceX-xAI Merger: Fact-checking the $1.25 Trillion Deal and Orbital Data Centers

TL;DR: The SpaceX-xAI merger was announced on 2026-02-02, with reports citing a combined valuation of SpaceX ($1 trillion) plus xAI ($250 billion). SpaceX’s push for solar-powered orbital data centers (an FCC filing) is the key narrative linking rockets, satellites, and AI compute. Regulatory scrutiny of Grok is ongoing, and the UK ICO opened a formal investigation into X and xAI on 2026-02-03. Context: the SpaceX-xAI merger is not “just another AI acquisition.” The deal reframes AI competition as an infrastructure race spanning ground and orbit, while amplifying governance and compliance risk. Why it matters: this is not a product-launch story but a supply-chain story (data → model → compute → deployment) with real regulatory exposure. ...

February 4, 2026 · 3 min · 581 words · Roy

vibe coding and ADHD: An Operating Model for Higher Productivity and Fewer Incidents

TL;DR: Vibe coding means letting an AI generate code without deeply caring about what it produced. That can feel frictionless, especially if you struggle with planning and organization, but it can also amplify risk unless you add structure. In this post, vibe coding is treated as a prototype-first technique, and the goal is a production-safe operating model using gates, small diffs, and accountability. Prerequisites (definitions, scope, and one common misconception): Vibe coding (1 sentence): telling an AI what you want and letting it generate the product, often without fully understanding the code. Not the same as AI-assisted programming: if you review, test, and understand the output, many practitioners argue that is not vibe coding. ADHD (1 sentence): a neurodevelopmental disorder with persistent patterns of inattention and/or hyperactivity-impulsivity affecting functioning. Misconception: “AI always makes developers faster.” A randomized trial with experienced OSS devs found AI use took longer in that setting. Why it matters: you can’t run safe operations if your team mixes “prototype vibes” with “production responsibility.” ...

February 4, 2026 · 4 min · 685 words · Roy

vibe coding and ADHD: Balancing Speed and Verification with Practical Guardrails

TL;DR: vibe coding is a “build fast and run it” workflow driven by natural language, so the burden shifts from writing code to verifying it. ADHD is not only an “attention problem”; in adults it also shows up as executive-function load (planning, organization, time management). Relying on “Accept All” raises security risk in particular, so hard gates such as tests, reviews, and scans are required. (karpathy) What vibe coding is: vibe coding is a workflow where you describe intent in plain language and iterate based on run results, often with minimal code reading. (X) Why it matters: speed without verification becomes “verification debt,” which is worse than normal tech debt. (NIST SP 800-218) ...
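To make “hard gates” concrete, here is a minimal sketch (not from the post) of a pre-merge gate script; it assumes pytest and pip-audit are installed, and the gate list is an illustrative choice rather than the article’s recommendation.

```python
# Minimal sketch: run hard gates and block the merge if any of them fail.
import subprocess
import sys

GATES = [
    ["pytest", "-q"],   # unit tests must pass
    ["pip-audit"],      # dependency scan must find no known vulnerabilities
]

def main() -> int:
    for cmd in GATES:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("all gates passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```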

February 4, 2026 · 3 min · 494 words · Roy

Gartner’s $2.52 Trillion AI Spending Forecast: A Budget and Operations Guide for the Infrastructure-Majority Era

TL;DR: Gartner forecasts worldwide AI spending of $2.52 trillion for 2026 (+44% YoY), with infrastructure accounting for more than half. This post covers how Gartner defines “AI spending,” why the infrastructure share is so large, and the practical responses for budgeting, governance, and architecture. Why it matters: if you treat AI as a small “project budget,” you will miss the infrastructure, services, and embedded-software costs where the spending actually lands. What Gartner means by “AI spending”: Gartner’s press release defines AI spending as the sum of eight markets (services, cybersecurity, software, models, DS/ML platforms, app development platforms, data, and infrastructure). ...

February 3, 2026 · 3 min · 432 words · Roy