In-person training:

AI Deep Learning Program
12 Weeks (6 × 4-Hour Sessions + Daily Embedded Coaching)
The AI Deep Learning Program is a 12-week hands-on program that teaches cross-functional teams to build, evaluate, and refine AI skill files, evaluators, and workflows on their actual codebase. Six trainer-led sessions run every two weeks, with a dedicated Embedded Engineer coaching your team daily between sessions. Participants leave with production-ready artifacts committed to their repositories — not slides they'll forget.
Please contact us to schedule a private class.
AI Deep Learning Program Course Details
The AI Deep Learning Program is built on a simple premise: AI skills only stick when teams build them into real work. The program pairs a senior trainer — who delivers six half-day sessions introducing techniques, establishing shared vocabulary, and building core skills — with a dedicated Embedded Engineer who works alongside your team every day between sessions, applying concepts to real projects, building habits, and extracting reusable skill files from actual project context.
The first six weeks build the foundation. Your team creates working skill files for Acceptance Criteria generation, unit test generation, and test planning. Each session produces a committed artifact your team uses immediately, not a concept they study later. The second six weeks make it rigorous. Participants learn to evaluate their own output, raise their skill files to a transferable standard, and build their first bounded agents and workflows.
Three roles participate together throughout the entire program — Product/Process, Development, and Testing — because the artifact chain from Acceptance Criteria to Tests to Test Plans only works when everyone understands every link.
AI Deep Learning Program Learning Objectives
Loop 1: Build the Foundation (Weeks 1–6)
Session 1: Acceptance Criteria Generation
Understand what makes Acceptance Criteria specific, testable, and AI-consumable
Build a working skill file that generates AC from real backlog stories
Commit the skill file to your repository and use it with the Embedded Engineer the next day
Session 2: Test Generation from Acceptance Criteria
Generate unit tests directly from stories and Acceptance Criteria using a skill file
Understand how AC quality drives test quality
Produce a test generation skill file and integrate it into your workflow
Session 3: Test Plan Generation
Assemble full coverage plans from Acceptance Criteria and generated tests
Create a test plan skill file that connects the full chain
Complete the AC → Tests → Test Plans artifact pipeline
Loop 2: Make It Rigorous (Weeks 7–12)
Session 4: Evaluating AI Output
Build evaluators that measure skill file output quality with structured frameworks
Learn to identify failure modes and systematically improve skill file performance
Target at least one workflow step at 95%+ accuracy
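To make the evaluation idea concrete, here is a minimal sketch of the kind of structured evaluator Session 4 covers, assuming generated Acceptance Criteria are scored against a small rubric; the rubric checks and sample texts below are illustrative, not the program's actual framework.

```python
# Hypothetical evaluator sketch: score skill-file output against a rubric
# and compute the fraction that passes every check. All check names and
# thresholds here are illustrative assumptions.

def evaluate_criterion(text):
    """Apply rubric checks to one generated AC; return check -> pass/fail."""
    lowered = text.lower()
    return {
        "has_actor": any(w in lowered for w in ("user", "admin", "system")),
        "is_testable": any(w in lowered for w in ("when", "then", "should")),
        "is_specific": len(text.split()) >= 6,  # crude proxy for specificity
    }

def accuracy(outputs):
    """Fraction of outputs that pass every rubric check."""
    passed = sum(1 for o in outputs if all(evaluate_criterion(o).values()))
    return passed / len(outputs)

samples = [
    "When a user submits an empty form, the system should show a validation error",
    "It works",  # vague: fails has_actor and is_testable
]
score = accuracy(samples)  # 0.5 for these two samples
```

In practice the rubric would be richer and run over many generated outputs, and the 95% target from the session corresponds to `accuracy` reaching 0.95 on a representative sample.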
Session 5: Raising the Bar — Peer Review and Transferability
Peer-review skill files across the team for consistency and clarity
Raise skill files to a transferable standard that any team member can use with consistent results
Establish team conventions for skill file maintenance and versioning
Session 6: Bounded Agents and Workflows
Chain validated skill files into bounded agents and repeatable workflows
Define governance boundaries and failure-handling strategies
Plan ongoing adoption and integration into existing team processes
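As a rough illustration of what "bounded" means in Session 6, the sketch below chains two skill-file steps with a quality gate between them and an explicit failure path; the step functions and gate condition are stand-ins, not the program's actual agent framework.

```python
# Hypothetical bounded-workflow sketch: chain AC generation into test
# generation, with a governance boundary (a quality gate) between steps.
# Step implementations and the gate condition are illustrative assumptions.

def generate_ac(story):
    """Stand-in for an AC-generation skill-file step."""
    return f"When {story}, then the expected outcome is verified"

def generate_tests(ac):
    """Stand-in for a test-generation skill-file step."""
    return [f"test: {ac}"]

def run_workflow(story, max_retries=1):
    """Run AC -> tests; halt instead of passing bad output downstream."""
    for _attempt in range(max_retries + 1):
        ac = generate_ac(story)
        if "when" in ac.lower() and "then" in ac.lower():  # quality gate
            return generate_tests(ac)
    # Failure handling: stop the chain rather than continue on bad input
    raise RuntimeError("AC failed quality gate; halting workflow")

tests = run_workflow("a user submits an order")
```

The design point is that each link in the chain only runs on output that passed the previous gate, which is what keeps an agent "bounded" rather than free-running.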
Who is the AI Deep Learning Program For?
Cross-functional software teams ready to embed AI into their daily workflow, not just experiment with it
Development teams that want to move beyond ad-hoc prompting to structured, repeatable AI-assisted processes
Product Owners and Business Analysts who define Acceptance Criteria and want AI to amplify their output
QA and Testing professionals seeking to generate tests and test plans from structured inputs
Engineering leads evaluating how to roll out AI tooling across their organization with measurable results
Teams that have tried AI coding tools but haven't seen consistent, reliable output
Prerequisites
Each participant should have a laptop with an IDE and access to at least one AI coding tool (Claude Code, Cursor, Windsurf, GitHub Copilot, or similar)
An active codebase or project the team is working on (artifacts are built against real work, not exercises)
No prior experience with AI skill files, evaluators, or agentic workflows is required
What You Leave With
Working Acceptance Criteria, test generation, and test plan skill files committed to your repository
Evaluators that measure and improve your skill file output quality
Peer-reviewed, transferable skill files that any team member can use with consistent results
At least one workflow step validated at 95%+ accuracy — reliable enough to trust
Bounded agents and workflows that chain skills into repeatable, governed automation
A daily practice established with the Embedded Engineer that outlasts the program