Machine Learning Guide
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 42:21:33
Synopsis
This series aims to teach you the high-level fundamentals of machine learning from A to Z. I'll teach you the basic intuition, algorithms, and math. We'll discuss languages and frameworks, deep learning, and more. Audio may be an inferior medium for the task; but with all our exercise, commute, and chore hours of the day, not having an audio supplementary education would be a missed opportunity. And where your other resources will provide you the machine learning trees, I'll provide the forest. Additionally, consider me your syllabus. At the end of every episode I'll provide the best-of-the-best resources curated from around the web for you to learn each episode's details.
Episodes
-
MLA 030 AI Job Displacement & ML Careers
26/02/2026 Duration: 42min
ML engineering demand remains high with a 3.2 to 1 job-to-candidate ratio, but entry-level hiring is collapsing as AI automates routine programming and data tasks. Career longevity requires shifting from model training to production operations, deep domain expertise, and mastering AI-augmented workflows before standard implementation becomes a commodity.
Links: Notes and resources at ocdevel.com/mlg/mla-30. Try a walking desk - stay healthy & sharp while you learn & code. Generate a podcast - use my voice to listen to any AI generated content you want.
Market Data and Displacement: ML engineering demand rose 89% in early 2025. Median salary is $187,500, with senior roles reaching $550,000. There are 3.2 open jobs for every qualified candidate. AI-exposed roles for workers aged 22 to 25 declined 13 to 16%, while workers over 30 saw 6 to 12% growth. Professional service job openings dropped 20% year-over-year by January 2025. Microsoft cut 15,000 roles, targeting software engineers, and 30% of its code
-
MLA 029 OpenClaw
22/02/2026 Duration: 51min
OpenClaw is a self-hosted AI agent daemon that executes autonomous tasks through messaging apps like WhatsApp and Telegram using persistent memory. It integrates with Claude Code to enable software development and administrative automation directly from mobile devices.
Links: Notes and resources at ocdevel.com/mlg/mla-29.
OpenClaw is a self-hosted AI agent daemon (Node.js, port 18789) that executes autonomous tasks via messaging apps like WhatsApp or Telegram. Developed by Peter Steinberger in November 2025, the project reached 196,000 GitHub stars in three months.
Architecture and Persistent Memory: Operational Loop: Gateway receives message, loads SOUL.md (personality), USER.md (user context), and MEMORY.md (persistent history), calls LLM for tool execution, streams response, and logs data. Memory System: Compounds context over months. Users should pro
-
MLA 028 AI Agents
22/02/2026 Duration: 37min
AI agents differ from chatbots by pursuing autonomous goals through the ReACT loop rather than responding to turn-based prompts. While coding agents are currently the most reliable due to verifiable feedback loops, the market is expanding into desktop and browser automation via tools like Claude Cowork and OpenClaw.
Links: Notes and resources at ocdevel.com/mlg/mla-28.
Fundamental Definitions: Agent vs. Chatbot: Chatbots are turn-based and human-driven. Agents receive objectives and dynamically direct their own processes. The ReACT Loop: Every modern agent uses the cycle: Thought -> Action -> Observation. This interleaved reasoning and tool usage allows agents to update plans and handle exceptions. Performance: Models using agentic loops with self-correction outperform stronger zero-shot models. GPT-3.5 with an agent loop scored 95.1% on HumanEval, whi
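The Thought -> Action -> Observation cycle described above can be sketched in a few lines of Python. This is a toy illustration, not any real agent framework: the LLM is replaced by a hypothetical hard-coded policy (`fake_llm`) and there is a single made-up `lookup` tool, just to show the loop's shape.

```python
# Minimal ReACT-loop sketch: Thought -> Action -> Observation, repeated
# until the "model" decides to finish. All names here are illustrative.

def fake_llm(objective, history):
    """Hypothetical stand-in for an LLM: picks the next thought and action."""
    if not history:
        return ("I should look up the value.", ("lookup", "answer"))
    # After one observation, finish with the observed value.
    return ("I have the observation; finish.", ("finish", history[-1][2]))

TOOLS = {"lookup": lambda key: {"answer": 42}[key]}  # toy tool registry

def react_agent(objective, max_steps=5):
    history = []  # list of (thought, action, observation) triples
    for _ in range(max_steps):
        thought, (tool, arg) = fake_llm(objective, history)  # Thought
        if tool == "finish":
            return arg
        observation = TOOLS[tool](arg)                       # Action -> Observation
        history.append((thought, (tool, arg), observation))  # interleaved trace
    return None

print(react_agent("What is the answer?"))  # -> 42
```

The interleaving is the point: because the observation is appended to the history before the next "model" call, the policy can revise its plan or handle a tool error at every step.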
-
MLA 027 AI Video End-to-End Workflow
14/07/2025 Duration: 01h11min
How to maintain character consistency, style consistency, etc. in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios gain full control with a layered ComfyUI pipeline to output multi-layer EXR files for standard VFX compositing.
Links: Notes and resources at ocdevel.com/mlg/mla-27.
AI Audio Tool Selection: Music: Use Suno for complete songs or Udio for high-quality components for professional editing. Sound Effects: Use ElevenLabs' SFX for integrated podcast production or SFX Engine for large, licensed asset libraries for games and film. Voice: ElevenLabs gives the most realistic voice output. Murf.ai offers an all-in-one stud
-
MLA 026 AI Video Generation: Veo 3 vs Sora, Kling, Runway, Stable Video Diffusion
12/07/2025 Duration: 40min
Google Veo leads the generative video market with superior 4K photorealism and integrated audio, an advantage derived from its YouTube training data. OpenAI Sora is the top tool for narrative storytelling, while Kuaishou Kling excels at animating static images with realistic, high-speed motion.
Links: Notes and resources at ocdevel.com/mlg/mla-26.
S-Tier: Google Veo: The market leader due to superior visual quality, physics simulation, 4K resolution, and integrated audio generation, which removes post-production steps. It accurately interprets cinematic prompts ("timelapse," "aerial shots"). Its primary advantage is its integration with Google products, using YouTube's vast video library for rapid model improvement. The professional focus is clear with its filmmaking tool, "Flow."
A-Tier: Sora & Kling: OpenAI Sora: Excels at interpreting complex narra
-
MLA 025 AI Image Generation: Midjourney vs Stable Diffusion, GPT-4o, Imagen & Firefly
09/07/2025 Duration: 01h12min
The AI image market has split: Midjourney creates the highest quality artistic images but fails at text and precision. For business use, OpenAI's GPT-4o offers the best conversational control, while Adobe Firefly provides the strongest commercial safety from its exclusively licensed training data.
Links: Notes and resources at ocdevel.com/mlg/mla-25.
The 2025 generative AI image market is defined by a split between two types of tools. "Artists" like Midjourney excel at creating beautiful, high-quality images but lack precise control. "Collaborators" like OpenAI's GPT-4o and Google's Imagen 4 are integrated into language models, excelling at following complex instructions and accurately rendering text. Standing apart are the open-source "Sovereign Toolkit" Stable Diffusion, which offers users total control, and Adobe Firefly, a "Professional's Walled Gard
-
MLG 036 Autoencoders
30/05/2025 Duration: 01h05min
Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.
Links: Notes and resources at ocdevel.com/mlg/36. Build the future of multi-agent software with AGNTCY. Thanks to T.J. Wilder from intrep.io for recording this episode!
Fundamentals of Autoencoders: Autoencoders are neural networks designed to reconstruct their input data by passing data through a compressed intermediate representation called a "code." The architecture typically follows an hourglass shape: a wide input and output separated by a narrower bottleneck layer that enforces information compression. The encoder compresses input data int
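The hourglass shape described above can be sketched as a forward pass in NumPy. This is only an illustration of the encoder/bottleneck/decoder dimensions: the weights are random and untrained, so the "reconstruction" is not accurate; the layer sizes (8 -> 3 -> 8) are arbitrary choices for the example.

```python
import numpy as np

# Untrained autoencoder sketch: wide input, narrow "code", wide output.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 3))   # encoder weights: compress 8 dims into a 3-dim code
W_dec = rng.normal(size=(3, 8))   # decoder weights: reconstruct 8 dims from the code

def encode(x):
    return np.tanh(x @ W_enc)     # bottleneck layer enforces information compression

def decode(code):
    return code @ W_dec           # expand the code back to the input width

x = rng.normal(size=(4, 8))       # batch of 4 samples, 8 features each
code = encode(x)
x_hat = decode(code)
print(code.shape, x_hat.shape)    # (4, 3) (4, 8)
```

Training would then minimize a reconstruction loss such as `np.mean((x - x_hat) ** 2)` with respect to the weights; the compressed `code` is what gets reused for dimensionality reduction.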
-
MLG 035 Large Language Models 2
08/05/2025 Duration: 45min
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction.
Links: Notes and resources at ocdevel.com/mlg/mlg35.
In-Context Learning (ICL): Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without up
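The RAG retrieval step mentioned above - embed documents, then rank them against a query by cosine similarity - can be sketched without any real vector database. The document "embeddings" below are toy hand-picked vectors standing in for a real embedding model's output.

```python
import numpy as np

# Toy RAG retrieval: rank document vectors against a query vector by cosine similarity.
docs = ["cats are mammals", "the moon orbits earth", "python is a language"]
doc_vecs = np.array([[1.0, 0.1, 0.0],    # illustrative embeddings, not from a model
                     [0.0, 1.0, 0.1],
                     [0.1, 0.0, 1.0]])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(query_vec, k=1):
    scores = [cosine_sim(query_vec, v) for v in doc_vecs]
    top = np.argsort(scores)[::-1][:k]   # highest similarity first
    return [docs[i] for i in top]

print(retrieve(np.array([0.9, 0.0, 0.1])))  # -> ['cats are mammals']
```

In a real pipeline the retrieved text is then pasted into the LLM prompt as grounding context; a vector database just makes this nearest-neighbor search fast at scale.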
-
MLG 034 Large Language Models 1
07/05/2025 Duration: 50min
Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting which significantly improve complex task performance.
Links: Notes and resources at ocdevel.com/mlg/mlg34.
Transformer Foundations and Scaling Laws: Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers allow for parallel training and inference of sequences using self-attention, in contrast to the sequent
-
MLA 024 Agentic Software Engineering
13/04/2025 Duration: 45min
Agentic engineering shifts the developer role from manual coding to orchestrating AI agents that automate the full software lifecycle from ticket to deployment. Using Claude Code with MCP servers and git worktrees allows a single person to manage the output and quality of an entire engineering organization.
Links: Notes and resources at ocdevel.com/mlg/mla-24.
The Shift: Agentic Engineering: Andrej Karpathy transitioned from "vibe coding" in February 2025 to "agentic engineering" in February 2026. This shift represents moving from casual AI use to using agents as the primary production coding interface. The goal is to automate the software engineering lifecycle, allowing a single person to manage system design and outcomes while agents handle implementation.
Tooling and Context Efficiency: Minimize MCP servers to preserve context. 12 active servers consume
-
MLA 023 Claude Code Components
13/04/2025 Duration: 01h08min
Claude Code distinguishes itself through a deterministic hook system and model-invoked skills that maintain project consistency better than visual-first tools like Cursor. Its multi-surface architecture allows developers to move sessions between CLI, web sandboxes, and mobile while maintaining persistent context.
Links: Notes and resources at ocdevel.com/mlg/mla-23.
Agent Comparison: Cursor: VS Code fork. Uses visual interactions (Cmd+K, Composer mode), multi-line tab completion, and background cloud agents. Credit-based billing ($20 to $200). Codex CLI: Terminal-first Rust agent. Uses GPT-5.3-Codex. Features three autonomy modes (Suggest, Auto-approve, Full Auto). Included in $20 ChatGPT Plus. Antigravity: Agent-first interface using Gemini 3 Pro. Manager View orchestrates parallel agents that produce verifiable task lists and recordings. Claude Code: T
-
MLA 022 Vibe Coding
09/02/2025 Duration: 17min
Andrej Karpathy coined "vibe coding" in February 2025 - a year later, 41% of all code is AI-generated, agents run multi-hour tasks autonomously, and the developer role has shifted from writing code to orchestrating systems.
Links: Notes and resources at ocdevel.com/mlg/mla-22.
In February 2025, Andrej Karpathy posted a tweet describing how he'd stopped reading diffs, hit "Accept All" on every suggestion, and just copy-pasted error messages back into the chat. He called it "vibe coding" - fully giving in to the vibes and forgetting the code even exists. The post got 4.5 million views. By late 2025, Collins Dictionary named it Word of the Year. But this wasn't a sudden invention. It was the culmination of a four-year arc that started with GitHub Copilot's line-by-line autocomplete in 2021 and accelerated through GPT-4, 192K+ token context windows, reasonin
-
MLG 033 Transformers
09/02/2025 Duration: 43min
Links: Notes and resources at ocdevel.com/mlg/33. 3Blue1Brown videos: https://3blue1brown.com/. Try Descript audio/video editing with AI power-tools.
Background & Motivation: RNN Limitations: Sequential processing prevents full parallelization - even with attention tweaks - making them inefficient on modern hardware. Breakthrough: "Attention Is All You Need" replaced recurrence with self-attention, unlocking massive parallelism and scalability.
Core Architecture: Layer Stack: Consists of alternating self-attention and feed-forward (MLP) layers, each wrapped in residual connections and layer normalization. Positional Encodings: Since self-attention is permutation invariant, add sinusoidal or learned positional embeddings to inject sequence order.
Self-Attention Mechanism: Q, K, V Explained: Query (Q): The representation of the token seeking contextual info. Key (K): The representation of tokens being compared against. Value (V): The inf
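The Q/K/V mechanism described above is just a few matrix operations; a minimal sketch of scaled dot-product self-attention in NumPy (random token embeddings, and for simplicity Q, K, and V are the raw embeddings rather than learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # each query scored against every key
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V                  # output: attention-weighted mix of values

rng = np.random.default_rng(0)
seq_len, d = 5, 4
X = rng.normal(size=(seq_len, d))       # toy token embeddings
out = self_attention(X, X, X)           # Q = K = V = X in this sketch
print(out.shape)                        # (5, 4)
```

Note the permutation invariance mentioned above: nothing in `self_attention` knows token order, which is exactly why positional encodings must be added to the embeddings first.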
-
MLA 021 Databricks: Cloud Analytics and MLOps
22/06/2022 Duration: 26min
Databricks is a cloud-based platform for data analytics and machine learning operations, integrating features such as a hosted Spark cluster, Python notebook execution, Delta Lake for data management, and seamless IDE connectivity. Raybeam utilizes Databricks and other ML Ops tools according to client infrastructure, scaling needs, and project goals, favoring Databricks for its balanced feature set, ease of use, and support for both startups and enterprises.
Links: Notes and resources at ocdevel.com/mlg/mla-21.
Raybeam and Databricks: Raybeam is a data science and analytics company, recently acquired by Dept Agency. While Raybeam focuses on data analytics, its acquisition has expanded its expertise into ML Ops and AI. The company recommends tools based on client requirements, frequently utilizing Databricks for its comprehensive nature.
Understanding Databricks: Databricks is not merely an analytics platform; it is a competitor in the
-
MLA 020 Kubeflow and ML Pipeline Orchestration on Kubernetes
29/01/2022 Duration: 01h08min
Machine learning pipeline orchestration tools, such as SageMaker and Kubeflow, streamline the end-to-end process of data ingestion, model training, deployment, and monitoring, with Kubeflow providing an open-source, cross-cloud platform built atop Kubernetes. Organizations typically choose between cloud-native managed services and open-source solutions based on required flexibility, scalability, integration with existing cloud environments, and vendor lock-in considerations.
Links: Notes and resources at ocdevel.com/mlg/mla-20.
Guest: Dirk-Jan Verdoorn - Data Scientist at Dept Agency.
Managed vs. Open-Source ML Pipeline Orchestration: Cloud providers such as AWS, Google Cloud, and Azure offer managed machine learning orchestration solutions, including SageMaker (AWS) and Vertex AI (GCP). Managed services provide integrated environments that are easier to set up and operate but often result in vendor lock-in, limiting portability across cloud pl
-
MLA 019 Cloud, DevOps & Architecture
13/01/2022 Duration: 01h15min
The deployment of machine learning models for real-world use involves a sequence of cloud services and architectural choices, where machine learning expertise must be complemented by DevOps and architecture skills, often requiring collaboration with professionals. Key concepts discussed include infrastructure as code, cloud container orchestration, and the distinction between DevOps and architecture, as well as practical advice for machine learning engineers wanting to deploy products securely and efficiently.
Links: Notes and resources at ocdevel.com/mlg/mla-19.
Translating Machine Learning Models to Production: After developing and training a machine learning model locally or using cloud tools like AWS SageMaker, it must be deployed to reach end users. A typical deployment stack involves the trained model exposed via a SageMaker endpoint, a backend server (e.g., Python FastAPI on AWS ECS with Fargate), a managed database (such as
-
MLA 017 AWS Local Development Environment
06/11/2021 Duration: 01h04min
AWS development environments for local and cloud deployment can differ significantly, leading to extra complexity and setup during cloud migration. By developing directly within AWS environments, using tools such as Lambda, Cloud9, SageMaker Studio, client VPN connections, or LocalStack, developers can streamline transitions to production and leverage AWS-managed services from the start. This episode outlines three primary strategies for treating AWS as your development environment, details the benefits and tradeoffs of each, and explains the role of infrastructure-as-code tools such as Terraform and CDK in maintaining replicable, trackable cloud infrastructure.
Links: Notes and resources at ocdevel.com/mlg/mla-17.
Docker Fundamentals for Development: Docker containers encapsulate operating systems, packages, and code, which simplifies dependency management and deployment. Files are added to containers using either the COPY command for
-
MLA 016 AWS SageMaker MLOps 2
05/11/2021 Duration: 01h25s
SageMaker streamlines machine learning workflows by enabling integrated model training, tuning, deployment, monitoring, and pipeline automation within the AWS ecosystem, offering scalable compute options and flexible development environments. Cloud-native AWS machine learning services such as Comprehend and Polly provide off-the-shelf solutions for NLP, time series, recommendations, and more, reducing the need for custom model implementation and deployment.
Links: Notes and resources at ocdevel.com/mlg/mla-16.
Model Training and Tuning with SageMaker: SageMaker enables model training within integrated data and ML pipelines, drawing from components such as Data Wrangler and Feature Store for a seamless workflow. Using SageMaker for training eliminates the need for manual transitions from local environments to the cloud, as models remain deployable within the AWS stack. SageMaker Studio offers a browser-based IDE environment with IPython
-
MLA 015 AWS SageMaker MLOps 1
04/11/2021 Duration: 47min
SageMaker is an end-to-end machine learning platform on AWS that covers every stage of the ML lifecycle, including data ingestion, preparation, training, deployment, monitoring, and bias detection. The platform offers integrated tools such as Data Wrangler, Feature Store, Ground Truth, Clarify, Autopilot, and distributed training to enable scalable, automated, and accessible machine learning operations for both tabular and large data sets.
Links: Notes and resources at ocdevel.com/mlg/mla-15.
Amazon SageMaker: The Machine Learning Operations Platform: MLOps is deploying your ML models to the cloud. See MadeWithML for an overview of tooling (also generally a great ML educational run-down).
Introduction to SageMaker and MLOps: SageMaker is a comprehensive platform offered by AWS for machine learning operations (MLOps), allowing full lifecycle management of machine learning models. Its popularity provides access to extensive resources, educ
-
MLA 014 Machine Learning Hosting and Serverless Deployment
18/01/2021 Duration: 49min
Builders can scale ML from simple API calls to full MLOps pipelines using SST on AWS, utilizing Aurora pgvector for search and Spot instances for 90 percent cost savings. External platforms like Modal or GCP Cloud Run provide superior serverless GPU options for real-time inference when AWS native limits are reached.
Links: Notes and resources at ocdevel.com/mlg/mla-14.
Core Infrastructure: SST uses Pulumi to bridge high-level web components (API, Database) with low-level AWS resources (SageMaker, GPU clusters). The framework enables infrastructure-as-code in TypeScript, allowing developers to manage entire ML lifecycles within a single configuration.
Level 1-2: Foundational Models and Edge Inference: AWS Bedrock: Managed gateway for models including Claude 4.5, Llama 4, and Nova. It provides IAM security, VPC isolation, and integrated billing. Knowledge B