Sparse Autoencoders (SAEs) and Cross-Layer Transcoders (CLTs) are two approaches to interpreting the internals of transformer models. Read up on what each is good for and how they differ.
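As a starting point for the comparison, here is a minimal sketch of the core SAE idea: reconstruct a model's activations through an overcomplete bottleneck with an L1 sparsity penalty, so that individual latent units tend to become interpretable features. All dimensions, hyperparameters, and the random stand-in data below are illustrative assumptions, not the setup from any particular paper.

```python
# Minimal Sparse Autoencoder (SAE) sketch for transformer activations.
# Sizes and training data are placeholders for illustration only.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Overcomplete dictionary: d_hidden >> d_model.
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # Sparse, non-negative feature activations.
        f = torch.relu(self.encoder(x))
        # Reconstruction of the original activation vector.
        x_hat = self.decoder(f)
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that pushes each
    # activation to be explained by only a few features.
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity


if __name__ == "__main__":
    d_model, d_hidden = 512, 4096            # assumed sizes
    sae = SparseAutoencoder(d_model, d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

    # Stand-in for activations collected from a transformer.
    acts = torch.randn(64, d_model)

    x_hat, f = sae(acts)
    loss = sae_loss(acts, x_hat, f)
    loss.backward()
    opt.step()
    print(f"loss={loss.item():.4f}, "
          f"active features={(f > 0).float().sum(dim=-1).mean().item():.1f}")
```

Roughly speaking, a transcoder keeps the same sparse bottleneck but changes the target: instead of reconstructing its own input, the features are trained to predict MLP outputs, and in the cross-layer case they write to the current and later layers, which is what makes CLTs suited to tracing circuits across the whole model.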
A brief history of LLM scaling laws, from compute-optimal training and inference to scaling test-time compute, and whether scaling laws are coming to an end.