🎣 JudeW's Knowledge Brain


LLM Basic MOC

Apr 14, 2026, 1 min read

  • #LLM
  • #basic
  • #MOC

Input

  • Tokenizer in Modern LLMs
  • BPE Encoding Complexity and Optimization
  • Input Embedding in Modern Transformers

Architecture

  • Transformer in LLM
  • Activated Params in MoE Models
  • DeepSeek V4 Architecture Tricks

Evaluation

  • Perplexity
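
Of the evaluation topics above, perplexity has a definition compact enough to sketch inline: it is the exponentiated average negative log-likelihood the model assigns to the observed tokens. A minimal illustration (the `perplexity` helper is an assumption for this sketch, not from the linked note):

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the model's probability for each
    observed token: PPL = exp(-(1/N) * sum(log p_i))."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n  # mean negative log-likelihood
    return math.exp(nll)

# Sanity check: a uniform guess over 4 outcomes gives perplexity exactly 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # → 4.0
```

Lower is better: a perplexity of 4 means the model is, on average, as uncertain as a uniform choice among 4 tokens.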


