AI Sales Forecasting Part 8: Hierarchies, Cold-Start, and Promotion Uplift
Introduction TL;DR: AI Sales Forecasting must stay consistent across planning levels (total/category/SKU). The common production pattern is (1) generate base forecasts, then (2) apply forecast reconciliation (e.g., MinT) to enforce coherence. For new items, “cold-start” is solved by borrowing signal from hierarchies and similar items (metadata/content/price tiers). Promotions should be designed either as model features or as a separate uplift (counterfactual) estimation pipeline (e.g., CausalImpact/BSTS). Why it matters: Without coherence, different teams will operate on different numbers, breaking replenishment and planning alignment. ...
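The coherence idea above can be sketched in a few lines. This is a minimal OLS projection, which is the special case of MinT with an identity error covariance; the hierarchy (total = SKU A + SKU B) and the numbers are illustrative, not from the article:

```python
import numpy as np

# Summing matrix S: maps the 2 bottom-level series (A, B) to all 3 levels.
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # SKU A
              [0.0, 1.0]])  # SKU B

# Incoherent base forecasts: total (100) != A + B (55 + 52 = 107).
y_hat = np.array([100.0, 55.0, 52.0])

# OLS reconciliation (MinT with identity error covariance): project the
# base forecasts onto the coherent subspace spanned by the columns of S.
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_rec = P @ y_hat

print(y_rec)  # reconciled forecasts: total now equals A + B exactly
```

Full MinT replaces the identity with an estimate of the base-forecast error covariance, which shifts how the adjustment is shared across levels while preserving coherence.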
AI Sales Forecasting to Replenishment: Service Levels, Safety Stock, and Reorder Point (Part 6)
Introduction TL;DR: AI Sales Forecasting becomes valuable only when it drives ordering decisions. Build a lead-time (or protection-period) demand distribution, pick the right service metric (CSL vs fill rate), and set reorder point/order-up-to levels using quantiles. Avoid “adding daily P95s” to get a lead-time P95—use sample-path aggregation. For reliable uncertainty, calibrate prediction intervals (e.g., conformal forecasting). Why it matters: Forecast accuracy is not the objective; meeting service targets at minimal total cost is. ...
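The "don't add daily P95s" point above is easy to demonstrate with a simulation. This sketch assumes independent Poisson daily demand purely for illustration; with sample-path aggregation you sum each simulated path over the lead time first, then take the quantile:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 5  # lead time in days
# 10,000 simulated sample paths of daily demand over the lead time.
paths = rng.poisson(lam=20, size=(10_000, L))

# Wrong: take the P95 of each day, then add them. For independent days
# this overstates the lead-time P95, because daily extremes rarely align.
daily_p95_sum = np.quantile(paths, 0.95, axis=0).sum()

# Right: sum demand along each sample path, then take the P95 of the sums.
lead_time_p95 = np.quantile(paths.sum(axis=1), 0.95)

print(daily_p95_sum, lead_time_p95)  # the path-based P95 is clearly lower
```

The gap between the two numbers is exactly the over-buffering you pay for when safety stock is set from summed daily quantiles.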
AI Sales Forecasting Part 4: Feature-based ML Design for Demand Forecasting
Introduction TL;DR: AI Sales Forecasting with feature-based ML turns time series into a supervised regression problem using lags/rolling stats, calendar signals, and exogenous variables. The winning recipe is: feature taxonomy → point-in-time correctness → rolling-origin backtests → WAPE → quantile forecasts. Why it matters: This approach scales across many SKUs/stores and stays maintainable when your catalog grows.
1) What "feature-based ML" means for sales forecasting
Definition: convert time series into a feature table (lags/rollings/calendar/exogenous) and fit a regressor (GBDT).
Misconception: "GBDT can't do time series." It can, if the feature pipeline and validation are correct.
Why it matters: Most failures come from leakage and bad validation, not from the model class. ...
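The feature-table idea can be sketched with pandas. The column names and toy series are illustrative; the key detail is `shift(1)` before any rolling statistic, so the feature at date t only uses sales through t-1:

```python
import pandas as pd

sales = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=10, freq="D"),
    "units": [5, 7, 6, 9, 8, 10, 12, 11, 9, 13],
})

# Lags: yesterday's and last week's sales as features.
sales["lag_1"] = sales["units"].shift(1)
sales["lag_7"] = sales["units"].shift(7)

# Rolling mean over the 3 days BEFORE t: shift first, then roll,
# otherwise the window includes today's target and leaks.
sales["roll_mean_3"] = sales["units"].shift(1).rolling(3).mean()

# Calendar signal: day of week (0 = Monday).
sales["dow"] = sales["date"].dt.dayofweek

print(sales.tail(3))
```

A GBDT regressor then trains on these columns with `units` as the target; the first rows with NaN features are simply dropped.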
AI Sales Forecasting: Backtesting with Rolling-Origin CV, Baselines, and Report Gates (Part 3)
Introduction TL;DR: AI Sales Forecasting must be evaluated using genuine forecasts on unseen data, not training residuals. Use rolling forecasting origin (rolling-origin CV) with explicit choices: horizon, step, window type, and refit policy. Report WAPE + MASE (and pinball loss for quantiles) and compare everything against two fixed baselines: seasonal naive + ETS. In this lecture-style part, you’ll build a backtest setup that matches deployment conditions and produces a decision-ready report. ...
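The rolling-origin setup described above fits in a small helper. The split function, the 24-point toy series, and the seasonal-naive baseline are illustrative stand-ins for the article's full backtest harness:

```python
import numpy as np

def rolling_origin_splits(n, initial, horizon, step):
    """Yield (train_idx, test_idx) pairs with an expanding training window."""
    origin = initial
    while origin + horizon <= n:
        yield np.arange(origin), np.arange(origin, origin + horizon)
        origin += step

y = np.arange(24, dtype=float)  # toy "monthly" series, 2 seasons of 12

for train, test in rolling_origin_splits(len(y), initial=12, horizon=3, step=3):
    # Seasonal-naive baseline: forecast = value one season (12 steps) earlier.
    forecast = y[test - 12]
    wape = np.abs(y[test] - forecast).sum() / np.abs(y[test]).sum()
    print(f"origin={train[-1] + 1}, WAPE={wape:.3f}")
```

Each iteration is a genuine forecast on data after the origin, so averaging WAPE across folds matches the deployed refit-and-forecast cycle rather than training residuals.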
AI Sales Forecasting: Data Modeling Template for Demand Forecasting (Part 2)
Introduction TL;DR: AI Sales Forecasting often fails due to data semantics (schemas, time meaning, leakage), not model choice. Model your sources as sales + calendar + price + promo + inventory/stockouts, then build a stable training/inference view. Enforce point-in-time correctness for time-series feature joins to prevent leakage. Treat stockouts as censored demand and track them explicitly. In this Part 2, you’ll get a practical data model and validation rules you can lift into a warehouse/lakehouse. ...
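Point-in-time correctness for feature joins can be sketched with `pandas.merge_asof`. The tables and column names below are illustrative, not the article's schema; the point is the backward as-of join:

```python
import pandas as pd

# Fact table: daily sales. Both tables must be sorted on their join keys.
sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-02", "2024-01-05", "2024-01-09"]),
    "units": [10, 12, 8],
})
# Dimension table: price changes, each valid from its effective_date.
prices = pd.DataFrame({
    "effective_date": pd.to_datetime(["2024-01-01", "2024-01-06"]),
    "price": [9.99, 11.49],
})

# Point-in-time join: each sale gets the latest price known ON OR BEFORE
# its date. A plain join on the nearest date could attach a future price,
# which is exactly the leakage the article warns about.
joined = pd.merge_asof(sales, prices,
                       left_on="date", right_on="effective_date",
                       direction="backward")
print(joined[["date", "units", "price"]])
```

The same pattern applies to promo flags, inventory snapshots, and any slowly changing dimension joined onto a training view.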
AI Sales Forecasting: Designing an AI-based Demand Forecasting System (Part 1)
Introduction TL;DR: AI Sales Forecasting succeeds when forecasts are tied to decisions (inventory, ordering, staffing), not when a model merely outputs numbers. Use an end-to-end flow: requirements → data contract → baselines + backtesting → model strategy → probabilistic forecasts → deployment + monitoring. Prefer probabilistic forecasting (quantiles/intervals) when under- and over-forecasting costs are asymmetric. In this series, AI Sales Forecasting is treated as a production system: dataset design, evaluation, deployment mode, and operational guardrails come first. ...
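The asymmetric-cost point above is what quantile (pinball) loss encodes. A minimal sketch with illustrative numbers (the function and arrays are not from the article):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball loss for quantile q: penalizes under- and over-forecasts asymmetrically."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([100.0, 120.0, 90.0])
p50 = np.array([95.0, 125.0, 90.0])   # median forecast
p90 = np.array([130.0, 150.0, 115.0])  # high quantile forecast

# At q=0.9, under-forecasting (y_true > y_pred) costs 9x more than
# over-forecasting by the same amount, mirroring stockout vs. overstock costs.
print(pinball_loss(y_true, p50, 0.5), pinball_loss(y_true, p90, 0.9))
```

Evaluating each quantile with its own pinball loss is what makes a probabilistic forecast auditable against the decision it serves.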
Open LLM Leaderboard trends: reading Hugging Face v2 without fooling yourself
Introduction TL;DR: Open LLM Leaderboard v2 shifts evaluation toward instruction-following, hard reasoning, long-context multi-step reasoning, and difficult science QA. In the public v2 “contents” view, the Average ranges from 0.74 to ~52.1, and GPQA / MuSR are clear bottlenecks (their maxima are much lower than other tasks). Top entries often include merged/community-tuned models, so you should separate “leaderboard performance” from “production-ready choice.” Why it matters: If you treat a leaderboard rank as a production verdict, you’ll pick the wrong model. ...
2026 Big Tech AI infrastructure spending $650B: what the capex numbers really mean
Introduction TL;DR: Media summaries put 2026 Big Tech AI infrastructure spending at roughly $650B, while Reuters frames it as more than $630B. (Bloomberg.com) Amazon guided about $200B (company-wide capex), Alphabet guided $175B–$185B, and Meta guided $115B–$135B including finance lease principal payments. (Amazon) The "total" varies mostly because definitions (leases vs cash PP&E) and periods (calendar vs fiscal year) don't line up perfectly across companies. (Microsoft) Context: "2026 Big Tech AI infrastructure spending $650B" is shorthand for a hyperscaler capex super-cycle aimed at AI data centers, accelerated computing, and networking. Reuters describes the same theme as over $630B combined. (Bloomberg.com) ...
Alphabet 2026 CapEx: What a near-doubling means for AI infrastructure, cost, and ops
Introduction TL;DR: Alphabet guided 2026 CapEx to $175B–$185B, vs. $91.447B in 2025 property & equipment purchases (roughly 1.9–2.0x). The company ties the ramp to meeting customer demand and expanding AI infrastructure, alongside strong FY2025 results. For practitioners, the takeaway is not "AI hype" but the concrete need to harden capacity planning, FinOps controls, security, and observability. Why it matters: CapEx guidance is an operational signal: it shapes real-world capacity, constraints, and budget realities for AI workloads. ...
Nscale IPO: What an Nvidia-backed Neocloud's Listing Preparations Mean
Introduction TL;DR: An Nscale IPO is not a confirmed listing; it means the company has hired Goldman Sachs and JPMorgan to prepare for an IPO, with no timeline set. This post defines the "neocloud" model and distills the verified facts and IPO headlines into a practical risk checklist.
1) Definition: What is a "Neocloud"?
One-sentence definition: A **neocloud** is a cloud provider specialized in GPU-centric AI training/inference, as opposed to a general-purpose hyperscaler.
Scope (what it is / isn't) Is: sells GPU capacity plus data-center operations as AI compute infrastructure. Isn't: an "AI model company" whose core value is model IP.
Common misconception: "Nvidia-backed" ≠ "Nvidia subsidiary"; it usually signals investment/partnership/ecosystem ties, not control. ...