
Special Issue of Teaching Philosophy

Guest Editor: Syed Abumusab

In the past few years, researchers have drawn on Big Data to develop what are now termed Large Language Models (LLMs). Recently, LLMs have shown the capacity to produce academic essays that students could potentially submit in response to assignments in their undergraduate philosophy courses. An increasing worry for educators at all levels is the effect of LLMs’ wide accessibility on student learning, especially in writing-intensive courses across the curriculum. Many educators have recognized that essays produced by programs like GPT-3 (including its newer text-davinci-003 model) or Jasper may be indistinguishable from human-written papers. Some have argued that writing is essential for educators to test their students’ capacity to think critically, as well as their ability to comprehend, engage with, and formulate arguments about complex texts. Skills like these are potentially at risk of being lost or degraded, much as calculators have eroded the skill of mental arithmetic.

On the other hand, like a calculator, perhaps LLMs can serve as a tool that frees students from the more routine tasks of writing conventional academic essays, leaving them more time to think deeply and in new ways about their coursework. Is this tool likely to be harmful or beneficial? Are the worries exaggerated? This special issue solicits articles that engage with the ways that LLMs may affect students, teachers, and other stakeholders within the realm of education.

For an initial discussion of some of these issues, see John Symons’s “Conversation Stopper” in Return Magazine at

We are interested in questions that include, but are not limited to, the following:

  • How important are writing-intensive courses to a student’s education? And how might LLMs affect the way we understand learning outcomes associated with writing?
  • Is the use of LLMs by students or teachers harmful or unethical? Does it matter?
  • Is there a way to incorporate LLMs into one’s pedagogy as a tool to help students become better writers?
  • How can LLMs be used in the classroom to contribute to students’ capacity to think critically and deeply about their coursework? Or can they?
  • In the case of specific subjects, is the use of LLMs likely to be harmful or beneficial in different ways?
  • Can instructors judge a student’s philosophical or writing skills if the student has assistance from an LLM?
  • Can instructors design courses to ensure LLM usage is eliminated or minimized? Or is that unnecessary?
  • Are there any technical solutions to these quickly rising problems (digital watermarks, for example)? Is there any way to mitigate their use? Or is this a lost cause, and incorporating them is the path of least resistance?
  • If these tools become commodified, will marginalized students who are unable to access them be harmed disproportionately? If so, what are the ethical implications, and are there ways to minimize the harm? Or is there a possibility that tools like Jasper might provide students who cannot access personalized tutors a helping hand, thereby leveling the playing field?

Submission Procedure

We anticipate final articles to run between 6,000 and 8,000 words, including an abstract of no more than 150 words. Submissions should be prepared for anonymous peer-review. For detailed formatting instructions please consult the journal's Submission Guidelines.

Manuscripts should be submitted by June 30, 2023, online at

Please note the “LLM and Teaching Writing Special Issue” in your cover letter. Inquiries about this call can be directed to the Editor, Maralee Harrell, at [email protected].