Local LLM Chatbot - CS1 Assignment

Jason Madar / jmadar -at- langara.ca

While all the assignment resources are provided in the bundled zip file, we believe what makes this assignment truly nifty is its zero-cost, one-click setup infrastructure. To see this in action, visit the GitHub repo: https://github.com/env3d/cs1-llm-local-chatbot

A quick-start guide for students:

Metadata

Summary: Local LLM Chatbot — students build a simple command-line chatbot powered by a local large language model (LLM). The one-click GitHub Codespaces setup removes all installation barriers. Students practice lists, dictionaries, loops, and conditional logic while discovering the stateless nature of LLMs and implementing conversation memory themselves.

Topics: A practical application of core Python data structures (lists, dictionaries), loops, and conditionals in the context of AI interaction and conversation management. Also introduces concepts of prompt engineering, context windows, and external state management for LLMs.

Audience: Appropriate for late CS1 or early CS2 students with basic Python programming knowledge.

Difficulty: An intermediate assignment, taking approximately 2–3 hours for CS1 students to complete.

Strengths: The one-click, zero-cost setup means students can run a working AI chatbot in minutes, entirely in the browser. Engagement is high because the chatbot produces authentic and sometimes surprising interactions. The assignment naturally sparks curiosity about AI limitations (like poor math skills) and provides an authentic context for practicing programming fundamentals.

Weaknesses: The LLM’s responses can be inconsistent, which may confuse students without instructor guidance. Some students may become more focused on the novelty of AI than on the programming concepts.

Dependencies: Requires a GitHub account and a reliable Internet connection for free access to GitHub Codespaces. No local installation or AI background knowledge needed. Works entirely in a web browser.

Variants: Students can extend the chatbot with selectable personalities, file-based personality loading (practice with file I/O), adjustments to randomness via temperature parameters, or attempts at prompt-engineering “jailbreaks.” The infrastructure also enables larger projects, such as multi-bot interactions or creative storytelling exercises like the “Infinite Story” assignment.

Details

What makes the Local LLM Chatbot assignment nifty are two intertwined innovations:

  1. A zero-cost, one-click GitHub Codespaces setup that lets students run a working local LLM entirely in the browser, with no installation.
  2. A deliberately stateless chatbot core that leads students to discover that LLMs have no memory, and to implement conversation memory themselves using lists, dictionaries, loops, and conditionals.

Together, these elements make the assignment engaging, accessible, and pedagogically powerful. Students not only implement and extend a working LLM-powered chatbot from the command line, but also confront the technical and conceptual realities of modern AI in a way that sparks curiosity and critical reflection—all within a 2–3 hour CS1-level exercise.
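The conversation-memory idea above can be made concrete with a short sketch. The function names below are illustrative, not the assignment's actual API; the pattern — a list of role/content dictionaries that the program itself must carry and trim — is the core of what students implement:

```python
# A minimal sketch of the conversation-memory pattern students build.
# Function names here are illustrative, not the assignment's actual API.

def make_history(system_prompt):
    """Start a conversation with a system message that sets the bot's persona."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_text, reply_text, max_turns=10):
    """Record one user/assistant exchange, keeping only recent turns.

    The model is stateless: it only "remembers" what the program resends,
    so the history list must be carried (and trimmed to fit the context
    window) by the student's own code.
    """
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": reply_text})
    # Keep the system message plus at most the last max_turns exchanges.
    while len(history) > 1 + 2 * max_turns:
        del history[1:3]   # drop the oldest user/assistant pair
    return history

history = make_history("You are a friendly tutor.")
add_turn(history, "Hi!", "Hello! How can I help?")
print(len(history))  # 3: system message plus one exchange
```

Trimming from the front (while always preserving the system message) is the simplest policy; variants of the assignment can explore smarter strategies.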

Files and Resources

The assignment bundle (the zip file and the GitHub repo) includes the starter files — notably main.py and chat.py — along with the supporting setup resources.

Why Local LLM

Local, small LLMs offer several advantages beyond privacy and cost, making them particularly valuable in educational settings. One key benefit is transparency. Unlike cloud-based LLMs, local models fail more visibly, allowing students to observe and analyze their behavior in a controlled environment. This transparency fosters a deeper understanding of how these models operate and where their limitations lie.

Another advantage is the absence of additional censorship layers. While local models can still be trained to avoid certain topics, they are not subject to external filters. This makes prompt engineering more straightforward, enabling students to experiment freely and achieve desired outcomes with less interference.

Finally, local LLMs are highly swappable. Students and educators can easily replace one model with another, facilitating experimentation and comparison. This flexibility encourages exploration and helps students grasp the nuances of different models and their applications.
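To illustrate how little swapping a model involves: with llama-cpp-python, the model is just a GGUF file on disk, so changing models is a one-line path change. The file names and the "other" entry below are hypothetical placeholders:

```python
# Each entry maps a short name to a GGUF weights file on disk.
# File names are hypothetical; any chat-tuned GGUF model works the same way.
MODEL_PATHS = {
    "qwen-0.5b": "qwen2.5-0.5b-instruct-q2_k.gguf",   # model used in the assignment
    "other":     "some-other-instruct-model.gguf",    # drop-in replacement
}

def load_model(name):
    """Load the chosen model; the rest of the chatbot code is unchanged."""
    from llama_cpp import Llama  # imported lazily so the lookup is testable alone
    return Llama(model_path=MODEL_PATHS[name], n_ctx=2048)

print(MODEL_PATHS["qwen-0.5b"])
```

Students can compare two small models side by side simply by downloading a second .gguf file and adding an entry to the table.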


APPENDIX: Local Installation (Advanced / Institution-Level Option)

While the recommended approach is the one-click setup with GitHub Codespaces, it is also possible to install and run the chatbot locally on a student or lab machine. This option is not required for the assignment, but is provided for completeness.

Steps for Local Installation

  1. Create a Python virtual environment (Python 3.9+ recommended):
python3 -m venv llm-env
source llm-env/bin/activate   # On Windows: llm-env\Scripts\activate
  2. Install llama-cpp-python:
pip install llama-cpp-python

Warning: This step triggers a full build of llama.cpp from source. Compilation can take significant time and may require system-level development tools (e.g., CMake and a C/C++ compiler).

  3. Download the model weights from Hugging Face:
wget https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct-GGUF/resolve/main/qwen2.5-0.5b-instruct-q2_k.gguf

Place the .gguf file in your project directory (or update the code to reference the correct path).

  4. Run the chatbot using the provided main.py and chat.py files.
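For orientation, the overall shape of the chat code is roughly the loop below. This is a sketch, not the contents of the bundled files: get_reply is a stub standing in for the real llama-cpp-python call, and in the actual program a read-print loop around chat_once would take input from the terminal:

```python
# Skeleton of a command-line chat exchange. get_reply is a stand-in for
# the real call into llama-cpp-python; here it just echoes the last message.

def get_reply(history):
    # The real code would pass the full history to the local LLM here.
    return "echo: " + history[-1]["content"]

def chat_once(history, user_text):
    """Append the user's message, get a reply, and remember both."""
    history.append({"role": "user", "content": user_text})
    reply = get_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_once(history, "hello"))  # echo: hello
```

Because both the user message and the reply are appended to the same list, the stub can be replaced by a real model call without changing the loop.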

Caveats

This installation process is slow, error-prone, and highly dependent on the student’s local machine configuration (CPU, RAM, OS, compiler availability).

Many students in CS1 may not have the technical background to troubleshoot these issues, making this option inappropriate for beginners.

For this reason, we recommend local installation only for institution-managed lab environments, where dependencies can be pre-built and standardized.