Jason Madar / jmadar -at- langara.ca
While all the assignment resources are provided in the bundled zip file, we believe what makes this assignment truly nifty is its zero-cost, one-click setup infrastructure. To see this for yourself, visit the GitHub repo: https://github.com/env3d/cs1-llm-local-chatbot
A quick-start summary:
| Header | Content |
|---|---|
| Summary | Local LLM Chatbot — students build a simple command-line chatbot powered by a local large language model (LLM). The one-click GitHub Codespaces setup removes all installation barriers. Students practice lists, dictionaries, loops, and conditional logic while discovering the stateless nature of LLMs and implementing conversation memory themselves. |
| Topics | A practical application of core Python data structures (lists, dictionaries), loops, and conditionals in the context of AI interaction and conversation management. Also introduces concepts of prompt engineering, context windows, and external state management for LLMs. |
| Audience | Appropriate for late CS1 or early CS2 students with basic Python programming knowledge. |
| Difficulty | An intermediate assignment, taking approximately 2–3 hours for CS1 students to complete. |
| Strengths | The one-click, zero-cost setup means students can run a working AI chatbot in minutes, entirely in the browser. Engagement is high because the chatbot produces authentic and sometimes surprising interactions. The assignment naturally sparks curiosity about AI limitations (like poor math skills) and provides an authentic context for practicing programming fundamentals. |
| Weaknesses | The LLM’s responses can be inconsistent, which may confuse students without instructor guidance. Some students may become more focused on the novelty of AI than on the programming concepts. |
| Dependencies | Requires a GitHub account and a reliable Internet connection for free access to GitHub Codespaces. No local installation or AI background knowledge needed. Works entirely in a web browser. |
| Variants | Students can extend the chatbot with selectable personalities, file-based personality loading (practice with File I/O), adjustments to randomness via temperature parameters, or attempts at prompt engineering “jailbreaks.” The infrastructure also enables larger projects, such as multi-bot interactions or creative storytelling exercises like the “Infinite Story” assignment. |
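The "selectable personalities" variant above amounts to swapping the system prompt. A minimal sketch of the idea (the personality names and prompt texts are illustrative, not from the assignment handout):

```python
# Sketch of the "selectable personalities" variant: a personality is
# just an alternative system prompt, stored in a dict (or loaded from
# files for File I/O practice). Names and prompts are illustrative.

PERSONALITIES = {
    "pirate": "You are a pirate. Answer every question in pirate slang.",
    "teacher": "You are a patient teacher. Explain every answer step by step.",
}

def system_message(name):
    """Build the system message that seeds a new conversation."""
    return {"role": "system", "content": PERSONALITIES[name]}

# Selecting a personality is then a single dictionary lookup:
opening = system_message("pirate")
```

Loading the prompts from text files instead of a literal dict turns the same variant into File I/O practice.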
What makes the Local LLM Chatbot assignment nifty are two intertwined innovations:
One-click, zero-cost setup with GitHub Codespaces: Students can launch a pre-configured virtual environment in GitHub Codespaces that runs a local large language model (LLM) entirely within the cloud-based development container. Normally, enabling LLM inference requires complex installation steps or costly server-side infrastructure—both impractical in introductory courses. By bundling pre-compiled binaries and environment scripts, this assignment reduces setup to a single click, eliminating all technical barriers.
Authentic AI literacy through hands-on programming: As students interact with their chatbot, they quickly discover a fundamental property of LLMs—their stateless nature. Memory, learning, and context management must be implemented outside the model. This realization naturally leads to discussions about prompt engineering, context windows, and the limits of AI systems, all while practicing with core Python data structures like lists and dictionaries.
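The memory bookkeeping students implement can be sketched with a list of role/content dictionaries. The `chat()` stub below stands in for the assignment's chat.py wrapper so the sketch runs without a model; the real wrapper would send the history to the local LLM instead:

```python
# Sketch of external conversation memory for a stateless LLM.
# chat() here is a stub standing in for the assignment's chat.py
# wrapper: it echoes the last user message instead of calling a model.

def chat(messages):
    """Stub for the chat() wrapper; returns a canned reply."""
    return "You said: " + messages[-1]["content"]

def converse(user_turns):
    """Feed each user turn to the model, carrying the full history
    forward so the stateless LLM appears to 'remember' the dialogue."""
    history = [{"role": "system", "content": "You are a helpful chatbot."}]
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = chat(history)  # the model sees the whole history each time
        history.append({"role": "assistant", "content": reply})
    return history

history = converse(["Hi!", "What did I just say?"])
```

Because the model itself retains nothing between calls, dropping the `history` list (or truncating it to fit a context window) immediately changes the chatbot's apparent memory, which students can observe directly.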
Together, these elements make the assignment engaging, accessible, and pedagogically powerful. Students not only implement and extend a working LLM-powered chatbot from the command line, but also confront the technical and conceptual realities of modern AI in a way that sparks curiosity and critical reflection—all within a 2–3 hour CS1-level exercise.
The assignment includes the following files:

- `index.html`: this document
- `handout.pdf`: assignment handout to students, also available at https://github.com/env3d/cs1-llm-local-chatbot
- `chat.py`: wrapper to simplify interaction with the local LLM. For this assignment, students simply need to import the `chat()` function via `from chat import chat`
- `main.py`: provides starter code and is the only file students are expected to modify
- `test_assignment.py`: provides auto-grading. Students can check whether they have completed the assignment correctly by running `pytest` in the terminal
Local, small LLMs offer several advantages beyond privacy and cost, making them particularly valuable in educational settings. One key benefit is transparency. Unlike cloud-based LLMs, local models fail more visibly, allowing students to observe and analyze their behavior in a controlled environment. This transparency fosters a deeper understanding of how these models operate and where their limitations lie.
Another advantage is the absence of additional censorship layers. While local models can still be trained to avoid certain topics, they are not subject to external filters. This makes prompt engineering more straightforward, enabling students to experiment freely and achieve desired outcomes with less interference.
Finally, local LLMs are highly swappable. Students and educators can easily replace one model with another, facilitating experimentation and comparison. This flexibility encourages exploration and helps students grasp the nuances of different models and their applications.
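In practice, swapping models can be as small as changing one file path. A hedged sketch of this idea (the second model name and path are placeholders for illustration; only the Qwen file is used in this assignment):

```python
# Sketch of model swappability: to the chatbot code, a model is just
# a .gguf file path, so comparing models is a configuration change.
# "some-other-model.gguf" is a placeholder, not a real file.

MODELS = {
    "qwen-0.5b": "qwen2.5-0.5b-instruct-q2_k.gguf",  # model used in this assignment
    "other": "some-other-model.gguf",                # placeholder entry
}

def model_path(name):
    """Resolve a model name to its weights file; a wrapper like chat.py
    could pass the result to llama_cpp.Llama(model_path=...)."""
    if name not in MODELS:
        raise ValueError(f"unknown model: {name}")
    return MODELS[name]
```

Students can then compare models simply by downloading another `.gguf` file and adding one entry to the table.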
While the recommended approach is the one-click setup with GitHub Codespaces, it is also possible to install and run the chatbot locally on a student or lab machine. This option is not required for the assignment, but is provided for completeness.
Steps for Local Installation

1. Create and activate a virtual environment:

   ```shell
   python3 -m venv llm-env
   source llm-env/bin/activate   # On Windows: llm-env\Scripts\activate
   ```

2. Install the llama-cpp-python package:

   ```shell
   pip install llama-cpp-python
   ```

   Warning: this step triggers a full build of llama.cpp from source. Compilation can take significant time and may require system-level development tools (e.g., CMake and a C/C++ compiler).

3. Download the model weights:

   ```shell
   wget https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct-GGUF/resolve/main/qwen2.5-0.5b-instruct-q2_k.gguf
   ```

4. Place the `.gguf` file in your project directory (or update the code to reference the correct path).
This installation process is slow, error-prone, and highly dependent on the student’s local machine configuration (CPU, RAM, OS, compiler availability).
Many students in CS1 may not have the technical background to troubleshoot these issues, making this option inappropriate for beginners.
For this reason, we recommend local installation only for institution-managed lab environments, where dependencies can be pre-built and standardized.