## Foundations
- LLM Basic - Modern Transformer Notes
- Tokenization
	- Tokenizer in Modern LLMs
	- BPE Encoding Complexity and Optimization
- Input Embedding in Modern Transformers
- LLM Hyperparameter
- LLM Precision
- About perplexity

## Architecture
- LLM Architecture - MOC

## Training
- LLM Training - MOC

## Fine-tuning
- LLM Fine-tuning - MOC

## Inference and Serving
- LLM Inference - MOC

## RAG and Frameworks
- What’s RAG?
- LangChain Explained

## Evaluation
- Tasks to evaluate
- BERT - Maybe can be deployed in other LM

## Agent and Appendix
- agent SKILL mechanism
- Embed vs. Transform