cmpttnl cnstrnt: An Exercise in Constraint and Prompt Engineering

This assignment, by Douglas Luman, is from the TextGenEd collection in the WAC Clearinghouse Repository.

The abstract from the site explains:

As new context-aware generative models challenge the human relationship to language, students benefit from first-hand observation of these models’ successes and limitations. Using these models often requires using “prompts” (natural language-based directions) to guide their output. The method of developing these directives has quasi-formalized into a practice known as “prompt engineering.” Serving as a gentle introduction to the intentionality, opportunities, and limits of the prompt engineering process, this work proposes and describes initial outcomes from an assignment that uses similarities between model prompting and the constraint-based literary work of the Ouvroir de littérature potentielle (“Oulipo”) to focus student attention on precision and specificity of prompts and their execution. Beyond familiarizing students with contemporary technologies (particularly OpenAI’s GPT platform) and the nascent practices developing around them, this assignment also aims to give students first-hand experience with the reflexivity of using language to describe language in preparation for larger conversations about language as a technology and the roles of large language models (LLM) in human expression.

Key Features of This Assignment

Introduction to Prompt Engineering
Students learn the concept and practice of prompt engineering: writing natural-language directions that steer a large language model (LLM) toward a desired output. The assignment treats prompting as a skill in its own right, one that rewards precision and specificity in how a request is worded.
Application of Constraints
The assignment challenges students to build constraints into their prompts, echoing the constraint-based writing of the Oulipo group (for example, a lipogram, a text that avoids a chosen letter). Working within a constraint forces students to test how faithfully an LLM can execute an exact instruction, and so to probe both its limitations and its creative possibilities.
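To make the idea concrete, here is a minimal sketch of what a constrained, Oulipo-style prompt might look like in code. It assumes the current OpenAI Python SDK (openai>=1.0) and an illustrative model name; the assignment itself leaves the tooling, the model, and the wording of the constraint to the student.

```python
# Hypothetical example of an Oulipo-style constrained prompt sent to
# OpenAI's GPT platform. The model name and prompt wording are
# illustrative, not taken from the assignment itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A lipogram constraint: ask the model to write without the letter "e".
constraint_prompt = (
    "Write a four-sentence description of a morning walk. "
    "Constraint: do not use the letter 'e' anywhere in your answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any chat-capable model would do
    messages=[{"role": "user", "content": constraint_prompt}],
)

print(response.choices[0].message.content)
```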
Hands-on Practice and Reflection
Students draft prompts, examine the text the model returns, and reflect on where the output honors or breaks the constraint. Repeating this cycle of prompting, analysis, and revision makes the gap between what a prompt says and what the model actually does visible, and shows why precise prompt engineering matters.
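As an illustration of the analysis step, the short sketch below checks a piece of model output against the lipogram constraint from the previous example. The helper function and the sample output string are hypothetical; in the assignment, this kind of checking could just as easily be done by eye.

```python
# Hypothetical check a student might apply when analyzing model output
# against a lipogram constraint (here, avoiding the letter "e").
def find_violations(text: str, banned_letter: str = "e") -> list[str]:
    """Return every word in `text` that contains the banned letter."""
    return [word for word in text.split() if banned_letter in word.lower()]

# Stand-in for text the model returned; note the slip on a small word.
model_output = "A bright sun hangs low; birds flit past the window."

violations = find_violations(model_output)
if violations:
    print(f"Constraint broken in {len(violations)} word(s):", violations)
    # Typical next step in the cycle: tighten the prompt and try again.
else:
    print("Output satisfies the constraint.")
```

Running the sketch flags the word "the", the kind of small, easily overlooked slip that makes the reflection step of the assignment worthwhile.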