Welcome to Royfactory

Covering the latest articles on development, AI, Kubernetes, and backend technologies.

Neural Network Basics: Build a Simple Image Classification Model (Lecture 8)

In this lecture, we’ll introduce Neural Networks, explain their core components, and build a simple image classification model using the MNIST handwritten digits dataset with TensorFlow/Keras. 1) What Is a Neural Network? A Neural Network is made up of interconnected units called neurons, organized in layers. Data flows through Input → Hidden Layers → Output, with weights and activations applied at each step. ...
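A minimal sketch of the kind of model this lecture builds with TensorFlow/Keras; the layer sizes and epoch count here are illustrative, not necessarily the exact values used in the post:

```python
# Minimal MNIST classifier sketch (assumes TensorFlow 2.x is installed;
# architecture and epochs are illustrative, not the post's exact lab).
import tensorflow as tf

# MNIST ships with Keras: 60,000 training and 10,000 test 28x28 grayscale digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784-dim vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```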

August 17, 2025 · 3 min · 454 words · Roy

Recurrent Neural Network (RNN) Basics: Theory and PyTorch Implementation (Lecture 11)

In this lecture, we’ll explore Recurrent Neural Networks (RNNs), one of the fundamental architectures for handling sequential data. We’ll cover the theory behind RNNs, their mathematical formulation, and their limitations, and then implement simple RNNs in PyTorch for both text and time-series prediction. 1) What is an RNN? Unlike feedforward networks that treat each input independently, RNNs are designed to remember previous states and use them in predicting future outputs. This makes RNNs highly effective for tasks where context and sequence order matter, such as: ...
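A minimal PyTorch sketch of the kind of RNN the lecture implements; the hidden size and toy input shapes are illustrative, not the exact values from the post:

```python
# Single-layer RNN for sequence prediction (hyperparameters are illustrative).
import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.rnn(x)           # out: (batch, seq_len, hidden_size)
        return self.fc(out[:, -1, :])  # predict from the last time step

# Toy usage: batch of 8 sequences, 20 time steps, 1 feature per step.
model = SimpleRNN(input_size=1, hidden_size=32, output_size=1)
x = torch.randn(8, 20, 1)
print(model(x).shape)  # torch.Size([8, 1])
```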

August 16, 2025 · 3 min · 557 words · Roy

Word Embeddings in NLP: From Word2Vec to Transformers (Lecture 10)

In this lecture, we will explore Word Embeddings, a fundamental concept in Natural Language Processing (NLP) that allows machines to understand words in terms of vectors. Instead of treating words as discrete symbols, embeddings capture semantic meaning by placing similar words closer in a vector space. 1) What Are Word Embeddings? Word embeddings are numerical representations of words in a continuous vector space. ...
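A tiny Word2Vec sketch showing the idea of words as vectors; it assumes gensim (not named in the excerpt) and uses a toy corpus purely to show the API shape:

```python
# Train word vectors on a toy corpus (assumes gensim 4.x is installed;
# the corpus and hyperparameters are illustrative only, not from the lecture).
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

vec = model.wv["cat"]                     # 50-dimensional vector for "cat"
print(vec.shape)                          # (50,)
print(model.wv.similarity("cat", "dog"))  # cosine similarity between two words
```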

August 16, 2025 · 3 min · 482 words · Roy

Data Preprocessing and Visualization for AI: A Complete Guide (Lecture 5)

In this lecture, we’ll cover data preprocessing—the crucial step to ensure your AI models work with clean, structured, and meaningful data. We’ll also explore data visualization techniques to better understand your dataset. 1) Why Data Preprocessing Matters Model performance depends heavily on data quality. Even the most advanced algorithms can fail if the input data is noisy or inconsistent. ...
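A minimal preprocessing-and-visualization sketch with pandas, scikit-learn, and matplotlib; the toy DataFrame and column names are hypothetical, not from the lecture:

```python
# Illustrative preprocessing steps: fill missing values, scale, and plot a distribution.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [25, 32, None, 41, 29],
    "income": [48000, 54000, 61000, None, 52000],
})

# 1) Handle missing values (here: fill with the column median)
df = df.fillna(df.median(numeric_only=True))

# 2) Scale features to zero mean and unit variance
scaled = StandardScaler().fit_transform(df[["age", "income"]])
print(scaled.mean(axis=0), scaled.std(axis=0))

# 3) Quick visual check of a distribution
df["income"].hist(bins=5)
plt.title("Income distribution")
plt.show()
```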

August 14, 2025 · 3 min · 429 words · Roy

Supervised Learning Practice: Classification and Regression (Lecture 6)

In this lecture, we’ll explore Supervised Learning, understand the difference between Classification and Regression, review popular algorithms, and implement both tasks using scikit-learn.

1) What Is Supervised Learning?
Supervised Learning uses input data (X) paired with labels (y) to train a model that can predict the correct output for unseen inputs.

1.1 Classification vs. Regression

| Type | Description | Output Examples | Use Cases |
|------|-------------|-----------------|-----------|
| Classification | Predicts a category | Spam/Not spam, species | Spam detection, diagnosis |
| Regression | Predicts a continuous value | Price, temperature | House price, sales forecast |

2) Classification

2.1 Concept
Assigns each input to one of several classes. Example: “Is this email spam?”

2.2 Common Algorithms
- Logistic Regression
- Decision Tree
- Support Vector Machine (SVM)
- Random Forest

3) Regression

3.1 Concept
Predicts a continuous value based on input features. Example: “Predict apartment price given size, location, and year built.”

3.2 Common Algorithms
- Linear Regression
- Ridge Regression
- Lasso Regression
- Decision Tree Regression

4) General Supervised Learning Workflow
1. Prepare data: separate features (X) and labels (y)
2. Split into training and testing sets
3. Choose a model and train it
4. Predict on test data
5. Evaluate model performance
6. Improve results via hyperparameter tuning

5) Lab: Classification Example (Iris Species)

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Load data
iris = load_iris()
X, y = iris.data, iris.target

# Train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Train model
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# Predict
y_pred = model.predict(X_test)

# Evaluate
print("Accuracy:", f"{accuracy_score(y_test, y_pred)*100:.2f}%")
print(classification_report(y_test, y_pred, target_names=iris.target_names))
```

6) Lab: Regression Example (California Housing Prices)

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np

# Load data
housing = fetch_california_housing()
X, y = housing.data, housing.target

# Train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train model
reg = LinearRegression()
reg.fit(X_train, y_train)

# Predict
y_pred = reg.predict(X_test)

# Evaluate
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)

print(f"RMSE: {rmse:.3f}")
print(f"R² Score: {r2:.3f}")
```

7) Evaluation Metrics
Classification ...

August 13, 2025 · 3 min · 478 words · Roy

Unsupervised Learning Practice: Clustering and Dimensionality Reduction (Lecture 7)

In this lecture, we’ll explore Unsupervised Learning, understand the concepts of Clustering and Dimensionality Reduction, and implement both techniques using scikit-learn.

1) What Is Unsupervised Learning?
Unsupervised Learning finds patterns, structures, or relationships in data without labels. Unlike supervised learning, there is no “answer key”—the model discovers hidden rules on its own.

1.1 Common Applications
- Customer Segmentation: Grouping customers based on purchase history
- Anomaly Detection: Fraud detection, early fault detection in systems
- Data Visualization: Reducing high-dimensional data to 2D/3D for interpretation

2) Clustering
Clustering groups similar data points together. Popular algorithms include: ...
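A minimal scikit-learn sketch combining both techniques, clustering with K-Means and projecting to 2D with PCA; the dataset choice and parameters are illustrative, not necessarily the lecture’s exact lab:

```python
# Clustering + dimensionality reduction on an unlabeled feature matrix.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = load_iris().data  # use only the features, ignoring the labels

# Cluster the data into 3 groups
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Project the 4-dimensional features down to 2D for visualization
X_2d = PCA(n_components=2).fit_transform(X)
print(X_2d.shape)   # (150, 2)
print(labels[:10])  # cluster assignment for the first 10 samples
```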

August 13, 2025 · 2 min · 403 words · Roy

AI Development Environment Setup: Anaconda, Jupyter, and GPU Acceleration (Lecture 4)

In this lecture, we’ll set up a stable AI development environment for Machine Learning and Deep Learning projects. You’ll learn how to install Anaconda, run Jupyter Notebook, and configure GPU acceleration with CUDA and cuDNN.

1) Why Environment Setup Matters
A well-configured environment prevents common issues such as:
- Library version conflicts
- Slow training due to CPU-only execution
- Non-reproducible results across team members

Goals: ...
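Once CUDA and cuDNN are installed, a quick Python check confirms the GPU is actually visible to your framework; this sketch assumes TensorFlow and/or PyTorch may be installed and simply skips whichever is missing:

```python
# Sanity-check GPU visibility after setting up CUDA/cuDNN.
import importlib.util

if importlib.util.find_spec("tensorflow"):
    import tensorflow as tf
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))

if importlib.util.find_spec("torch"):
    import torch
    print("PyTorch CUDA available:", torch.cuda.is_available())
```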

August 12, 2025 · 2 min · 397 words · Roy

Deep Learning Basics: CNN vs. RNN and a Hands-On MNIST Example (Lecture 3)

This is Lecture 3 of our AI 101 series. We’ll explain what Deep Learning is, compare CNNs and RNNs, and finish with a verified TensorFlow/Keras lab where you build a CNN to classify MNIST handwritten digits. 1) What Is Deep Learning? Deep Learning is a subset of Machine Learning that uses multi-layer artificial neural networks to model complex patterns in data—especially effective for images, audio, and text. ...
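A minimal sketch of a small Keras CNN in the spirit of the lecture’s MNIST lab; filter counts and epochs are illustrative and may differ from the verified lab in the post:

```python
# Small convolutional network for MNIST (assumes TensorFlow 2.x is installed).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel dim: (60000, 28, 28, 1)
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```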

August 12, 2025 · 4 min · 644 words · Roy

Machine Learning Basics: Supervised, Unsupervised, and Reinforcement Learning (Lecture 2)

This is Lecture 2 of our AI 101 series. We’ll break down three core types of Machine Learning, explore their real-world applications, and finish with a verified scikit-learn lab that runs locally without internet access. 1) What Is Machine Learning? Machine Learning (ML) is the process of teaching computers to learn patterns from data and make predictions without being explicitly programmed with rules. ...
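A minimal supervised-learning sketch in the spirit of the lecture’s offline lab, using the digits dataset bundled with scikit-learn so nothing is downloaded; the model choice is illustrative, not necessarily the one used in the post:

```python
# Offline scikit-learn example: classify the bundled 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # ships with scikit-learn, no internet needed
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```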

August 11, 2025 · 4 min · 674 words · Roy

AI 101: From Concepts to a Working Example (Lecture 1)

This is Lecture 1 of a 20-part series. We’ll cover what AI is, a short history, where it’s used, and finish with a hands-on lab you can run locally without any external downloads. 1) What Is AI? Artificial Intelligence (AI) enables computers to perform tasks that typically require human intelligence—learning, reasoning, perception, and language understanding. ...

August 10, 2025 · 5 min · 853 words · Roy