Policy on the use of AI Tools (generative AI)
This page discusses the use of AI tools (e.g., Generative AI) in CS559 this semester. It is more than just stating the policy (Short version: treat it as you would treat a friend who has taken the class). I want to explain how I am thinking about AI tools (as of January 2026) and how you might think about this for class. I have been thinking about this a lot - as a user of AI, as a computer scientist, and as an educator.
I believe that if you understand why you should be learning computer graphics “the old-fashioned way” (with limited use of AI tools), you will want to learn graphics that way, and will naturally follow the course policy.
But you can also simply read the policy and move on, without the long-winded explanation that follows.
The Official Policy For CS559, Spring 2026
Use AI tools (ChatGPT, Co-Pilot, etc.) to help you in your learning, not to do the assignments for you. Treat them like another student: you may ask them to help you, but you are responsible for your own work. Give them proper credit / attribution. Don’t claim their work as your own. You cannot ask them for help on Quizzes or Exams.
More detail:
- Students are responsible for what they turn in. If some helper (automated or otherwise) gives you bad advice, it is on you.
- Students must disclose the tools they use, how they used them, and why.
- Students must not use GenAI tools in situations where I specifically say they may not be used (e.g., quizzes, exams, surveys).
- Tools should be an assistant in your learning, not a replacement for your efforts.
- If you are in doubt, ask the course staff.
The Philosophy: Why this policy?
I’ve spent a lot of time thinking about this. But I find it hard to write about. So this is not the well-thought-out summary I had hoped to write.
AI Tools, CS Education, etc.
Full Disclosure: I use AI tools a lot. Gemini might ask for credit as a teaching assistant for this class. I program by asking some AI tool (usually GitHub Copilot or Google AntiGravity) to make something. Or, as I write things myself, I ask the tools for help. They fill in the boring parts of the code, write test cases, explain my bugs, etc. When I am trying to learn something new, I often start by asking Gemini.
AI tools are already part of the job of being a computer scientist. In the future, I expect we will think about AI assistants the way we think about compilers, or IDEs, or debuggers today.
So, we as computer scientists (emphasis on we: I am included) need to embrace these new tools and learn about them!
The tools aren’t perfect yet. But they are evolving rapidly, and they are already really good for some things, like writing programs for the small, well-specified assignments in undergraduate classes like Computer Graphics.
There are (at least) 3 different goals a student may have for this class…
- Learn Computer Graphics
- Learn about how to use AI tools
- Get a decent grade in Computer Graphics
These goals are distinct. I am designing the class to teach students Computer Graphics (#1). This class is not a great way to learn about AI tools (#2). It’s quite simply too easy for the AI: I’ve done the hard work of using AI for you by specifying the assignments in a clear and concise form, with each piece being relatively small. The things that make them good assignments for learners make them easy assignments for AI agents.
This creates a problem: If your goal is to get a decent grade in class (#3) without learning anything, you can simply use AI to write the programs for you. You won’t learn anything - you won’t even learn about using AI! See “How to get a grade without learning anything” below.
Given the current technology, I cannot reliably stop a student whose goal is to get through class without learning anything. Yes, I can use various tricks to make it more work for them. And maybe some of the time, I will catch academic misconduct violations. But I’d rather spend my time teaching computer graphics to the students who want to learn it than trying to make my assignments resilient to AI misuse.
The Deeper Questions: Why should you learn computer graphics?
Here are three interesting questions. Preview: I don’t have good answers.
- Should students learn computer graphics? (given that the AI can write most things for them)
- Is it possible to teach computer graphics without having students do the programming assignments (that AI can do for them)?
- Is it possible to teach computer graphics and programming with AI tools?
Point 1: I would like to say “yes, understanding the foundations of computer graphics is important even if in the future, the AI will be doing the programming.” But, to be completely honest, I think we just don’t have the science yet. In the near term, I believe that it is really useful to understand the foundations so that you can guide the AI to do the right thing, to fix its bugs, to know what’s possible to ask for, etc. In 2026, AI is a great assistant to a graphics programmer; but if you want to do anything interesting, you need to already be a graphics programmer to take advantage of that assistant. In the future, who knows.
So, for today, I am going with “I want to teach students the foundations of Computer Graphics because I believe it is still valuable”. If you don’t agree with this, don’t take the class.
Point 2: As for how: after 25 years, I haven’t found another way to teach computer graphics. Students have to “do” it - write the programs, work through problems, etc. I see no other way to build the understanding. Maybe someday in the future, some education scientist will figure out how to help people understand graphics without actually doing the work.
So, for today, I am going with “I am still going to guide students through the work of learning computer graphics.” If they don’t want to do the work, they won’t learn. I can’t force people to learn if they don’t want to.
Point 3: Learning to use AI tools might be a more important skill than computer graphics. However, this class is Computer Graphics. So my priority is teaching computer graphics. Maybe someday, I will figure out how to teach students to use AI tools at the same time. We’ll take some steps in that direction… I will suggest ways to use AI to help you program faster. But it is hard for me to know where to draw the line: at some point, getting too much help gets in the way of learning.
The Practical Question: where is the line?
How much AI usage is too much? Here are two very different cases:
- You give the assignment to an agent and say “fill in this workbook for me”.
- You mistype the word “frustum” (I misspell it half the time) and your intelligent editor puts a red squiggle to denote a spelling error.
Tool usage is a matter of degree. Is a spelling checker AI? Is a smart type checker that makes inferences about your code AI?
At what point does asking for help become having someone else do the work for you? You can ask the same question when the helper is another person.
I have no clear line for what is, or isn’t, appropriate. In reality, it is more a matter of intent. If you use a tool to avoid thinking, then it is getting in the way of your learning.
How to get a grade without learning anything
How much AI is “too much”? Let me give you an example.
First: please don’t do this. Even though having an AI do the work is easy for you, it is still work for the course staff to grade it. If you really don’t want to learn, don’t take the class.
Second: Class policy requires you to disclose AI usage. So, you have to be honest and fill in the disclosure box. (The disclosure applies to the workbooks; using AI on the surveys, quizzes, or exams is not allowed.)
From my (limited) testing, current (January 2026) AI models are good at doing the class assignments. You don’t need Claude Opus - I was able to have Gemini Flash do last year’s assignments. Some things about this year’s assignments will make them slightly harder for agents, but agents are getting better.
I did test several of last year’s assignments. I tried a few tools (Co-Pilot Agent, Google AntiGravity, etc.) and a few different models (Gemini 3 Flash, Gemini 3 Pro, Claude Opus 4.5, etc.). I got decent results by giving the agent the workbook and a simple prompt like “You are a student taking a computer graphics class and trying to get a good grade. Fill in this workbook.” This year’s workbooks are a little different - but the AI tools will have gotten better.
I tried similar tests using an agentic browser, going to the Canvas page for one of the quizzes and prompting “please fill in this form.” It didn’t get a good grade, but I had the free version of the agent, so it didn’t use a good model. Remember, you aren’t allowed to do this.
I shouldn’t be surprised that AI can do the class assignments well.
Class assignments are easy for AI models. First, the assignments are small and self-contained. And second, the assignments are clearly specified. Assignments need to be written so that students know what to do. If a student knows what to do, an AI agent will know what to do. In fact, it makes a good test: if the AI agent can’t figure out what an assignment is asking, there’s a good chance students won’t be able to do it either.
Plus, the AI models are trained on a corpus that includes previous students’ work. The models have seen similar code on GitHub, and they are not giving proper credit to the students whose code they copy from.
The hard work of using AI tools is specifying what you want. But the assignment already does that. The course staff has done the hard work.
Another piece of hard work is checking that the program is correct. But almost all of the class assignments are designed so that you can see whether the answer looks right. Again, the course staff has done the hard work.
Even the best AI agents (as of January 2026) still make some glaring mistakes. They can’t see very well, so they will do some hilarious things. A car will have its wheels on the roof, an airplane flies sideways, etc.
Remember, you are responsible for the code you turn in. If you turn in an airplane flying sideways with the propeller in the wrong place, we might ask you about it. Maybe it was meant as a joke. But you have to be honest.
I find that AI models tend to make the same mistakes that students make. I don’t know if that is because they are copying student code or making the same reasoning errors.
There is an academic integrity issue here: copying an assignment from a previous student without giving them attribution is academic misconduct. Kind of like LLMs stealing the work of newspapers and novelists (except without the lawyers). But we’ll put that aside for the moment.
How should (or shouldn’t) you use AI?
Turning in an assignment that the AI wrote completely is a waste of both of our time. It wastes your time: you will not learn how to use AI tools by feeding them our small, well-defined problems. It wastes our time (since we have to grade it): we already know that the AI can do the assignments.
Similarly, if we wanted the AI’s answers to our surveys, we would ask it ourselves (in fact, we do).
At the other extreme: Using a spelling checker (for your writing) or a syntax checker (for your code). Definitely okay.
But so many cases are hazy. Asking an AI to help explain a piece of code you don’t understand? Having it autocomplete a line of code to save some typing? Having it write some simple code that isn’t important for the learning objective? Having the AI write the skeleton where you fill in the details?
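To make that last case concrete, here is a hypothetical sketch (not an actual assignment from this class; every name in it is made up for illustration) of what an “AI writes the skeleton, you fill in the details” split might look like, written in TypeScript against the browser canvas API:

```typescript
// Hypothetical example only - not an actual CS559 assignment.
// The split: an AI assistant could reasonably write the animation-loop
// boilerplate below, while the transform math (the learning objective)
// is deliberately left for the student.

// --- Boilerplate an assistant could write for you ---
function animateSquare(canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext("2d");
  if (ctx === null) return; // no 2D context available

  let angle = 0;
  function frame(): void {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    const cx = canvas.width / 2; // spin about the canvas center
    const cy = canvas.height / 2;

    // Apply the student-written transform, then draw a square
    // centered at the origin of the transformed coordinate system.
    const [a, b, c, d, e, f] = rotateAbout(angle, cx, cy);
    ctx.setTransform(a, b, c, d, e, f);
    ctx.fillRect(-25, -25, 50, 50);

    angle += 0.02;
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

// --- The part you write yourself ---
// TODO (student): return the six entries [a, b, c, d, e, f] of the 2D
// affine matrix that rotates by `angle` radians about the point (cx, cy).
// Hint: compose translate(cx, cy) * rotate(angle) * translate(-cx, -cy).
function rotateAbout(
  angle: number,
  cx: number,
  cy: number
): [number, number, number, number, number, number] {
  // Placeholder: identity, translated to (cx, cy). The square draws
  // centered but will not spin until you work out the math above.
  return [1, 0, 0, 1, cx, cy];
}
```

The point of the split: the `requestAnimationFrame` plumbing teaches you very little about graphics, while deriving the composite transform is exactly the kind of understanding the class is after. Where the boilerplate ends and the learning begins will differ from assignment to assignment, which is part of why the disclosure matters.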
With so many hazy cases, we will try an experiment. We will trust students to use AI in ways that help them learn, not in ways that replace their learning. We ask students to disclose if and how they use AI.