Course Project Brief

AI Exposure and the Impact on Work

Build a general framework for judging how artificial intelligence changes work, then turn that framework into a practical checklist through co-design. Keep the scope tight, present the evidence clearly, and make sure the framework can travel across contexts.

Overview

In this project, you will build a general framework for judging how AI changes work and then turn that framework into a checklist through co-design. Each team will choose one of three project types, complete a scoping review, give and receive peer review, and develop its checklist through a co-design process.

This project is about building decision frameworks, not about arguing that AI is always good or always bad. Your goal is to produce tools that help people make better and more defensible decisions about AI at work.

Why This Project Matters

AI can change work through exposure. Here, exposure means that occupational tasks can be affected by AI tools such as text generation or coding assistance. Exposure does not mean full automation: AI can replace part of a task, support human judgment, reshape a workflow, or shift responsibility.

Because of this, decisions about AI at work are not only technical. They are also organizational, economic, and ethical. This project is meant to help you study those decisions in a clear and transferable way.

Choose One Project Type

Each team must choose one, and only one, of the following types. All team deliverables must stay aligned with that choice.

Type 1

Task delegation decision

Research question: What factors influence judgments that an AI-exposable task should or should not be delegated to AI (fully or partly)?

End product: A general checklist for deciding whether an AI-exposable task should be delegated to AI.

Type 2

Organizational adoption (risks and benefits)

Research question: What factors shape the expected benefits, risks, mitigations, and readiness when an organization adopts a specific AI use to augment/automate an occupational task?

End product: A general checklist for assessing the risks, benefits, mitigations, and feasibility of a specific AI use.

Type 3

Worker trust in AI

Research question: What factors determine whether a worker trusts AI to do a task, and when is that trust well calibrated?

End product: A general checklist for assessing whether trust in AI is warranted for a task and how that trust should be calibrated.

Generality Requirement

Write at the level of a general framework, not as a report on one occupation or one tool.

  • Your scoping review must synthesize factors that apply across occupations and organizations.
  • Your checklist must be general and transferable.
  • Profession-specific material should appear only as examples or in the worked use case.

A simple test is this: if another team working on a different profession could still use your framework with only small changes, then your framework is general enough.

Profession for the Worked Use Case

Each team must choose one profession from this tool; professions outside it may not be used. Use that profession only for brief examples, where useful, and for the required worked use case in the scoping review and the co-design report.

It is wise to choose a profession for which your team can reach at least a few people in that field. That will make the co-design stage easier and will make your worked use case more realistic.

Deliverables

Deliverable 1

Scoping Review (Team)

Purpose

Your scoping review should map and synthesize the literature that fits your chosen project type. The goal is to identify the main factors, organize them into a clear framework, and show how the framework can be used.

Submission: Maximum length: four pages in double-column ACM format. References and the appendix do not count toward the four-page limit.

What the scoping review must do

  • Address one chosen project type only.
  • Synthesize evidence at a general level.
  • Identify and organize the main factors.
  • Explain what is known, and where the gaps are.
  • Include a short worked use case that applies the framework to your chosen profession.

Minimum evidence requirements

  • 15 to 25 sources in total.
  • At least eight empirical sources (qualitative, quantitative, or mixed methods).
  • At least three credible practice sources, such as standards bodies, professional associations, major policy reports, or respected industry research.
  • Default time window: 2015 to the present. You may use older sources if they are foundational, but you must explain why.

Method transparency requirements

Your appendix must include the following:

  • Databases or search engines used.
  • Dates searched.
  • Search strings or keywords.
  • Inclusion and exclusion criteria.
  • Screening process.
  • Coding scheme (how you grouped the factors).

This is a scoping review, not a full systematic review.

Deliverable 2

Peer Review of a Scoping Review (Individual)

Purpose

You will review another team’s scoping review and give feedback that could improve it.

Submission: Maximum length: one page.

What your peer review must include

  • Top strengths (no more than three bullet points).
  • Top weaknesses (no more than three bullet points).
  • At least five actionable improvements. State what should change and why.
  • Your review should judge whether the scoping review is general and transferable, whether it stays aligned with its chosen project type, and whether its use case shows real application.

Deliverable 3

Co-Design Checklist and Report (Team)

Purpose

Using your scoping review as a base, you will co-design a checklist and report. This deliverable turns your review into a usable decision tool.

Submission: Maximum length: four pages in double-column ACM format. References and the appendix do not count toward the four-page limit.

What you must produce

You must produce a general checklist that fits your chosen type.

  • Type 1: Should this AI-exposable task be delegated to AI?
  • Type 2: Should this specific AI use augmenting/automating the task be adopted, given the expected risks, benefits, mitigations, and readiness?
  • Type 3: Is trust in AI warranted for this task, and how should that trust be calibrated?

The checklist itself must stay general. Use profession-specific content only in the worked use case.

Co-design requirement

You must involve three to six participants drawn from at least two stakeholder groups.

Possible stakeholder groups include:

  • Practitioners in the chosen profession.
  • Managers or team leads.
  • Risk, compliance, or legal staff.
  • Technical implementers.
  • End users or clients (optional).

Process and appendix

  • Either one workshop (45 to 60 minutes) or three interviews (about 20 minutes each).
  • Show iteration: checklist version 1, stakeholder feedback, and checklist version 2.

Appendix:

  • Interview or workshop guide.
  • Checklist version 1.
  • Checklist version 2.
  • Anonymized notes.

Deliverable 4

Peer Review of a Co-Design Report (Individual)

Purpose

You will review another team’s co-design report and assess both the process and the checklist that resulted from it.

Submission: Maximum length: one page.

What your peer review must include

  • Top strengths (no more than three bullet points).
  • Top weaknesses (no more than three bullet points).
  • At least five actionable improvements.
  • Your peer review should assess whether the co-design report shows real iteration (version 1 to version 2) and whether the checklist is usable (with clear, answerable, and actionable items) and truly general.

AI Tool Use Policy

You may use AI tools for brainstorming, editing, or organizing your work. If you do, you must disclose that use. Each team appendix must include an AI Use Disclosure that states:

  • Which tools were used.
  • How they were used.
  • What verification the team performed.
  • That team members read and checked all cited sources.

Failure to disclose AI tool use may be treated as a violation of course policy and may result in a grade of zero.

Grading

  • 10%: Vignettes or Future Wheel Foresight (individual)
  • 30%: Deliverable 1, Scoping Review (group)
  • 10%: Deliverables 2 and 4, Peer Reviews (individual)
  • 30%: Deliverable 3, Co-Design Checklist and Report (group)
  • 10%: Video with critical reflections on the sector (group)
  • 10%: Design Court debate (group)

To pass the course, students must earn at least 18 out of 30 on each component.