Abstract
The rapid emergence of artificial intelligence (AI) has profoundly reshaped writing, teaching, and scholarly life in higher education. While some existing literature focuses on ethical guidelines, AI literacy, and instructional integration, inadequate attention has been paid to the lived, existential experience of university educators encountering AI. This article offers an autobiographical inquiry into what it means to teach, write, and remain human in the gaze of artificial intelligence. Drawing on curriculum theory, existential philosophy, and narrative inquiry, the article traces the author’s long relationship with writing and teaching—from slow, embodied wrestling with words in adolescence and graduate study to the unsettling ease introduced by AI tools. While AI promises efficiency, convenience, and productivity, this ease is accompanied by a growing sense of boredom, dispossession, and ethical unease. Writing and teaching risk being reduced to accelerated outputs, while the joys of uncertainty, struggle, and reflective presence—central to human learning—are quietly eroded. Engaging Maxine Greene’s concept of the teacher as stranger, William Pinar’s notion of non-coincidence and subjective presence, and critiques of techno-corporate culture in higher education, the article argues for a praxis of refusal and witnessing. Refusal here does not signify technophobia, but a deliberate ethical stance that resists allowing machines to think, decide, and write on behalf of educators. Witnessing, instead, becomes a reflective praxis—an ongoing attentiveness to what AI does to teachers’ identities, relationships, and inner lives. Ultimately, the article proposes be/coming a teacher stranger alongside AI: remaining wide-awake, critically attuned, and existentially present in teaching and writing. 
In an age that privileges speed, automation, and standardization, preserving slowness, vulnerability, and autobiographical presence emerges as an ethical and pedagogical act—one essential to sustaining shared humanity in higher education.
1. Introduction
“Why would AI take away your own voice? You are utterly unique. No algorithm can possibly match your insight, erudition and voice. But if you don’t feel like writing, you mustn’t.”
This is a message my mentor, Professor William Pinar, sent in September 2025 after I wrote back to him about my reluctance to write in response to his question about whether I had any ideas for a new book.
“As for me, I don't have an idea about a new book. Frankly speaking, after AI’s emergence, I became less intrigued to write. Writing has never been an easy process for me. Yet, whenever I came up with an idea (or an idea came to me) or found the right way to express my thoughts, it brought so much satisfaction. But now, at times, I feel tempted to let AI draft an email, revise a paragraph or even come up with some ideas. However, that temptation frustrates me at the same time. It seems to take away the delight of wrestling with words and ideas until they finally come through”.
What AI indeed does to me, or to us as teachers and researchers, I wonder.
In the era of generative AI, university educators often find themselves challenged by the changing expectations for teaching and assessment methods [5]. Against this background, scholarly articles have been written about empowering university educators with “knowledge and skills required to navigate the complex landscape of higher education” [5]. Some are generally very optimistic about the integration of generative AI in higher education and propose best practices [7]. Cordero et al. [7] believe that AI offers great opportunities to enhance teaching and learning through personalized learning, improved efficiency, and new pedagogies, provided there is sufficient teacher training on the ethical use of AI to ensure its effective integration. Others are more cautious about the integration of AI in university classrooms and propose artificial intelligence literacy, mainly for students, to address challenges such as diminished critical thinking and authenticity in writing [4]. They argue that “students will need to develop AI literacy—composed of application, authenticity, accountability, and agency—to succeed in the workplace” [4]. AI literacy and competency, under the assumption that AI is something we can simply be trained to master, have been highlighted in scholarship [6].
Although many scholarly discussions have focused on the role of AI and the concerns raised about its misuse in education, they often imply the need for clear guidelines and teacher professional development on the use of AI as a tool for its integration in teaching and learning [2]. Very few studies address the lived experience of a university educator confronting the arrival of AI. The impact is felt not only in our teaching and scholarship, but more broadly in our identity as educators and, fundamentally, as human beings. To treat AI merely as a tool, a skill, or a literacy is to narrow its significance. Such a view overlooks how it alters the lived experience of teachers and the meaning of teaching itself. In an article about ChatGPT and the rise of semi-humans, Al Lily et al. [1] warn us that with the emergence of semi-human beings like ChatGPT, ethical concerns arise as “semi-humans impersonate human traits without consent or genuine human existence, blurring the boundaries between what is authentically and artificially ‘human’” [1]. This article argues that AI should be understood not merely as a pedagogical tool, but as a phenomenon that reshapes the lived experience of writing, teaching, and human presence in higher education.
2. Frustration/Joy with Writing and Teaching
Writing has never come easily to me; it has always been bound up with hesitation, revision, and emotional vulnerability. I remember that in middle school, when we were asked to write an essay in class, I was often the slowest writer, finishing the last word just as the bell rang. I often used tape to peel off words I didn’t want as I was writing. Sometimes the tape would peel off too much of the thin paper and leave a hole. So when my writing was completed, I often found several irregularly shaped holes in my essay paper, making it stand out from the piles of papers when we got our writing back from the teacher. Our teacher usually would leave some red marks, which further ripped the fragile paper apart, leaving scratches alongside my writing.
When I was doing my graduate studies, including my Master’s and Ph.D. at UBC from 2011 to 2018, there was no ChatGPT, Copilot, Gemini, or other “convenient” AI tool for me to brainstorm ideas or polish my writing. The little café near Acadia Park on the UBC campus was a space where I would temporarily escape from my family chores as a busy mom of two young kids (of course, there were blissful moments of being a mom as well), letting my ideas simmer as my eyes landed on a robin with reddish-orange breast feathers outside the rusty wooden window. Often, I found myself staring at the empty screen, daydreaming, grappling with vague and bubbling ideas, making them more delicious as I started to move my fingers on the keyboard of my MacBook. Sometimes, after I finished a sentence or two, I was not content and would delete them brutally, just as I had used tape to peel off words in my teenage years, leaving gaps or holes here and there. They seemed to be staring at me, screaming at me, calling my attention to them. But I would continue to tame them, massage them, as my ideas continued to jump around and fly.
I have never really left classrooms: I watched my mom teaching through the back-door window when I was little, and I sat in classrooms from elementary school through university before becoming a high school teacher in 2004. I returned to classrooms as a graduate student in 2011, and in 2021 I finally returned as a university teacher, after submitting my resumés relentlessly for a year and trying out some other interesting job opportunities very briefly (for weeks). When I restarted my teaching journey without overly strict external constraints, such as the prescribed and standardized requirements of the college entrance examination, I felt a sense of relief and freedom to teach in the way I had hoped to teach. Of course, that freedom did come with some anxiety and uncertainty about how and what to teach in my courses. I was very grateful for the help I received from colleagues and previous instructors who shared their syllabi with us as new teachers. When I was teaching in the UBC Bachelor of Education program, I was given a master syllabus constructed by previous instructors; new instructors could use it as a guideline while running the class the way they hoped. Instead of planning a class, I hoped to plan and craft a space for genuine and spontaneous conversations with my students. We shared our observations about teaching, asked questions together, and continued to ask better questions as the conversation went deeper, allowing some room for confusion and not knowing.
3. Ease/Boredom with the Arrival of AI
“Please polish.” “Give me some ideas about this topic.” “Come up with a refined syllabus.” “Make the email sound professional.” These are the kinds of convenient prompts that people, including myself, may feed into the ChatGPT box to “facilitate” writing, design syllabi, and refine emails since the arrival of ChatGPT about two years ago. Sometimes people even hope to generate an essay with a scholarly tone. Of course, I didn’t go that far, and I also warned my students of the risks of going down that path of violating academic integrity. However, using ChatGPT as a writing assistant is being accepted by more and more teachers and students. I, too, found greater ease in writing with the initial arrival of AI. Rama Ramakrishnan from MIT Sloan proudly shared the amazing things ChatGPT could do for us two years ago, such as “write poetry, write Seinfeld episodes, write essays, explain jokes, write code, solve simple reasoning problems,” and much more with the evolution of ChatGPT into its later generations.
However, Ramakrishnan also explains that ChatGPT takes the initial question or prompt, generates a probability table, samples a word from that table, and appends it to the starting question. This loop may run tens, hundreds, or thousands of times to generate an answer. It is pretrained on large datasets and generates responses through algorithmic processes before anything appears on my screen. While I am enjoying the convenience of ChatGPT’s responses, I am complicit in the processes of running the data loops and digesting the algorithms. I begin to feel that I am allowing a machine to overtake my human capacity for questioning. The convenience and efficiency may come at a considerable cost. It not only creates an excessive environmental footprint and depends on a large amount of human data-labelling, often outsourced and low-paid, but also creates a risk of cognitive offloading for me and many others whenever we wonder about something, depriving us of the very delight in the challenges of thinking. Often, then, with the ease of using ChatGPT, I felt it left me with an obscure distaste for thinking/writing/teaching as a teacher/researcher: the space of the café in which I struggled to find words for my writing became a distant yet cherished memory. A teacher’s classroom seems to be under the surveillance of large language models as well whenever I use them to help with a lesson plan. Virginia Woolf’s A Room of One’s Own [20] has become my dream today, as I imagine a classroom of our own, free from bots and algorithms. I started to feel bored and even terrified by the generation of texts within seconds, by the instant advice from ChatGPT, and much more. ChatGPT may one day know more than all living human beings combined; however, the database it accesses is already selective and exclusionary, often reliant on the loudest voices. How could I have a room of my own when ChatGPT reproduces socially dominant norms through biased databases? The most canonical curriculum question, “What knowledge is of most worth?”, an ethical rather than an empirical question, is precisely what is missing from AI’s knowledge base. Human beings need to be the ethical agents who discern good from bad, right from wrong, in particular situations with particular others. The knowledge of most worth is self-knowledge gained through dialogic encounters with others. However, AI risks displacing the messy but authentic ground of dialogue by offering immediate solutions and ready-made answers. Of course, sometimes it may have a better solution than finite mortal beings do. However, to fail, to flounder, to make mistakes, to struggle, to grow, to become more awake, and to change are all significant ways for us mortals to understand each other and ourselves better. In this sense, the Delphic injunction to “know thyself” names precisely what automated fluency may threaten: the slow work of self-examination.
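The sample-and-append loop Ramakrishnan describes can be sketched in a few lines of code. The sketch below is purely illustrative: the tiny probability table is invented for this example, whereas an actual model like ChatGPT derives its distributions over tens of thousands of tokens from pretraining on massive datasets.

```python
import random

# A toy "probability table": for each word, the possible next words and
# their probabilities. These numbers are invented for illustration only.
NEXT_WORD_PROBS = {
    "the": [("teacher", 0.5), ("student", 0.3), ("classroom", 0.2)],
    "teacher": [("writes", 0.6), ("asks", 0.4)],
    "student": [("writes", 0.5), ("asks", 0.5)],
    "classroom": [("opens", 1.0)],
    "writes": [("slowly", 0.7), ("quickly", 0.3)],
    "asks": [("questions", 1.0)],
}

def generate(prompt_word, steps, seed=0):
    """Repeat the loop described above: look up a probability table for
    the last word, sample one continuation, append it, and continue."""
    rng = random.Random(seed)
    text = [prompt_word]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(text[-1])
        if not options:  # no known continuation: stop early
            break
        words, weights = zip(*options)
        text.append(rng.choices(words, weights=weights, k=1)[0])
    return text

print(" ".join(generate("the", 3)))
```

Each pass through the loop produces exactly one word; a chatbot answer is simply this loop run hundreds or thousands of times, with nothing resembling deliberation between steps.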
In The Slow Professor: Challenging the Culture of Speed in the Academy, Berg and Seeber propose slow teaching by referencing the Slow Food movement’s defense of the pleasure of food against the threat of standardization: “we believe that we can combat stress and cynicism – while keeping ourselves alive – by promoting a pedagogy of pleasure” [3]. Enjoying our teaching and writing requires us to slow down rather than speed up, as ChatGPT and other AI tools encourage us to do. When we “put teaching in slow motion for a moment in the hope that paying attention to how we feel” [3], we are more likely to be attuned to and improvise in our teaching and writing, taming the bubbling and blissful ideas in our minds as we search for different ways to express them. Sadly, ChatGPT is designed to relentlessly reduce and even take away the moments of pausing, breathing, and pondering that belong to human practices of conversing and writing, compressing them into algorithmic loops of data within seconds. When we write our syllabi with the help of ChatGPT and design our class activities with the assistance of Copilot, we are not merely getting ideas from a helper who seems to know everything; we are rendering part of ourselves away in the moment of letting AI think for us and our students, “reinforcing a mind/body split along with a subject/object dichotomy” [3]. Unfortunately, AI seems to objectify us as well as others at the same time: we become valuable clients to be analyzed, served, entertained, and satisfied, while we may also begin to perceive others as objects to be managed in questions like “What activities could I use for my students?” The lived fulfillment of teaching and writing may be secretly taken away in the fast-paced, generated, generic, and emotionless responses from the chatbot, along with the often-irresistible convenience it offers us 24/7.
My mentor writes in his email: “when you don’t feel like writing, you mustn’t.” Indeed, why must I write if I don’t feel like writing? Is writing a way of life or a way for me to make a living? Do I really hope to express some ideas in my mind, or do I just hope to get an article published or finish teaching a class? AI arrives in an age tailored for it: an age of efficiency and effectiveness.
4. My Refusal to Allow AI to Take Over
Even if ChatGPT is sometimes perceived as a writing assistant and a helpful friend with its human-like qualities, we need to be fully aware that the processes of “humanising itself [are] through the ‘game of algorithms’” [1]. Behind all the humanised responses, the specifically generated texts and pictures, the comprehensive analysis and eloquent advice, the essence and origins of ChatGPT are technical, machine-based, and profit-driven. Is our shared humanity to be captivated and redefined by these “computational semi-human beings like ChatGPT” [1]?
Having used ChatGPT for several months, with both convenience and frustration, I decided to refuse to ask AI to think for me; I decided to refuse to ask AI to write for me; I decided to refuse AI’s offer of ready-made solutions; I decided to refuse to let AI analyze the situation I am in; I decided to refuse to let AI overtake or even colonize my very humanity. My decision to refuse is not a rejection of technology as such, but an ethical response intended to safeguard my humanity against the subtle diminishment I sensed this technological condition invited. I cherish the mistakes that I make not only as opportunities to learn but also as significant marks and traces of my very humanity; I embrace my confusion as a genuine and valuable space for me to unpack, to unwind, to organically make sense of things, to wait patiently for the arrival of an insight. Refusal to use AI was a deliberate decision that I made for myself after having had “a weird taste” of AI initially.
Nussbaum, in Not for Profit: Why Democracy Needs the Humanities, explores the importance of humanistic education, which seems an ever more pressing and urgent task in the face of AI today. She worries that within the corporate culture of higher education, “nations all over the world will soon be producing generations of useful machines, rather than complete citizens who can think for themselves, criticize tradition, and understand the significance of another person’s sufferings and achievements” [16]. AI is a product of, and serves the very purposes of, corporate and neoliberal culture. Its profitability, efficiency, and technological restructuring of education all contribute to the mechanization of human beings in service of the knowledge economy, the very danger Nussbaum warned us of a decade ago. We are not using ChatGPT merely as a tool for our own benefit; instead, we are reframing humanity when we allow machines to think for us, decide for us, and ultimately become (part of) us.
Noncoinciding [15] with AI allows me to keep a distance from the present, having some “degrees of separation from what subsumes us in the deafening din of the moment” [15]. I am aware that in total sync with AI we may risk submerging ourselves in the algorithm and disappearing into the illusion of the presentism of “all-that-is-is-now” [15]. Pinar reminds us of the importance of being aware of our historical humanity: “in the space of non-coincidence with what is, one’s subjective presence -- one’s bearing, comportment, carriage, conduct, one’s character – changes…. possessed by a present now infused by the past” [15]. Are quick fixes, instant solutions, and correct answers really what we should desire in the name of education?
5. Be/Coming a Teacher Stranger Alongside AI
I ask myself genuinely: Does my decision to refuse AI work for me? I have to admit that outright denial and refusal to use AI at all may become less and less feasible (if still desirable) when AI is embedded in search engines and recommended videos. Frenkenberg and Hochman have written an article titled “It’s scary to use it, it’s scary to refuse it” [10]. I sympathize with its title and with the authors’ attention to the psychological burdens, although I don’t fully agree with their proposal for “balanced implementation strategies to foster sustainable and effective AI integration while mitigating the risks of over-reliance” [10]. I am more concerned about what lies behind the integration strategies: what is being left out, what is being highlighted, and, primarily, what happens to our shared and lived humanity in the process of AI integration.
My unease with AI is not simply a matter of personal discomfort with technological change. It is better understood as an existential and pedagogical disturbance: a sense that the conditions of thought, writing, and relation are being altered. To name this disturbance more fully, I turn to Maxine Greene and William Pinar, whose work helps illuminate why teaching in the age of AI demands estrangement, attentiveness, and subjective presence. Be/coming a human alongside AI seems to be the only option left for us. Rather than integrating AI into teaching and learning seamlessly, I propose the cultivation of a praxis of witnessing AI in education: reflection in action and action in reflection about what AI does to us as teachers and students and more generally as human beings. With the praxis of witnessing AI in education as a teacher stranger, we keep alive Greene’s ideas from half a century ago.
In Teacher as Stranger, Maxine Greene writes that “to take a stranger’s vantage point on everyday reality is to look inquiringly and wonderingly on the world in which one lives…. Now, looking through new eyes, he cannot take the cultural pattern for granted” [11]. And “to make it meaningful again, [the teacher] must interpret and reorder what he sees in the light of his changed experience. [The teacher] must consciously engage in inquiry” [11]. The main concern for Greene is to make that “person visible to [oneself]” and hence to embrace the freedom and openness to see, to understand, and to signify for oneself. With the arrival of AI, could we, as Greene invites us, “[interpret] this reality forever new” [11], so that we could feel more alive than before rather than submerging ourselves in the large language system?
An existential teacher or a teacher stranger, for Greene, would be less concerned with telling another person how to live; nor could they command their students to exercise their will and become volunteers [11]. However, they could set up a classroom that “make(s) it difficult to have ‘peace of mind’” [11] and engages students in questioning and confrontation. An important component of being and becoming a teacher stranger alongside AI is resisting the instrumentalization of teaching. This means refusing to reduce teaching to quick fixes, standardized answers, or the pursuit of efficiency alone; instead, it calls us to become more aware, to understand more deeply, to reflect more critically, and to make room for multiple understandings of our shared realities. As a teacher stranger, I need to stay more awake, “wide awake” [12], and develop a fuller attention to the concrete reality in order to see it. Greene points out that wide-awakeness is not “the glowing abstractions…. [it] has a concreteness; it is related … to being in the world” [12]. Greene further refers to Alfred Schutz [18] and suggests that wide-awakeness originates in an attitude of full and active attention to life and its requirements [12]. Wide-awakeness contributes to “the creation of the self” [12] and social imagination. The creation of the self emphasizes a subjective re-construction in relation to others. The relational tension needs to be creatively attuned with our existential being. Pinar understands subjective presence as dwelling in the tension between solidarity and relationality: “Being (t)here in one’s singularity and attunement” [15]. Echoing Greene’s existential being of a teacher stranger in her wide-awakeness, subjective presence for Pinar is one’s existential experience of curriculum in relation with others, as a complicated conversation. One’s subjectivity needs to be “affirmed” autobiographically, “its immediacy, the accuracy of the moment, even when we resist it: embodied educational experience informed by the past while focused on the future” [15]. Subjective presence, in its existential nature, also “enables attentiveness to others” [15]. Indeed, “curriculum as a complicated conversation cannot emerge without deep listening” [19]. Subjective presence involves “being present for one’s life and in the lives of those in one’s midst” [15]. As Pinar continues to capture the tension in the space of non-coincidence:
In that space of non-coincidence with what is, one’s subjective presence – one’s bearing, comportment, carriage, conduct, one’s character – changes, as one’s presence alters one’s demeanor, possessed by a present now infused by the past, permitting what is now to become passage to the future [15].
Greene also acknowledges the possibility of social transformation in teachers’ wide-awake estrangement from what is given. In Releasing the Imagination, Greene suggests social imagination is “the capacity to invent visions of what should be and what might be in our deficient society, on the streets where we live, in our schools” [13]. As Kohli understands it, Greene’s advocacy of the social and transformative dimensions of imagination could be “enacted as a form of critical pedagogy” [14]. Freire reflects on concrete educational practice by taking his distance from it, and closing in on it at the same time, to let it reveal itself to him in its complexity [9]. He suggests: “Educational practice … involves processes, techniques, expectations, desires, frustrations, and the ongoing tension between practice and theory, between freedom and authority, where any exaggerated emphasis on either is unacceptable from a democratic perspective” [9].
Henry Giroux writes in his introduction to Paulo Freire’s Pedagogy of Hope:
For Freire, education calls us beyond ourselves, embraces the unfinished nature of human beings, and insists that human life is conditioned not determined – which involves the permanent act of searching, being creative, and engaging in an endless quest to become the subject and maker of history rather than a disconnected, passive object in the world [9].
Antonia Darder quotes Freire’s words in her introduction to Pedagogy of the Heart: “I am … a being in the world, with the world, and with others; I am a being who makes things, knows and ignores, speaks, fears, and takes risks, dreams and loves, becomes angry and is enchanted. I am a being who rejects the condition of being a mere object” [8]. Darder suggests that “[Freire insists] on a critical reading of technology that retains a dialectical perspective” [8]. In Pedagogy of Hope, Freire envisions “a view of technology as a powerful tool utilized for the oppressive as well as liberatory undertakings” [8]. In the gaze of AI, could we as teachers be cautious about its dangers while still seeking possibilities to “[utilize] technology for democratizing knowledge, the dissemination of unfairly censored information, and support of democratic dissent around the world” [8]?
In light of being a wide-awake teacher stranger, university teachers need courage to “make that person visible to [oneself]” [11]. If the teacher “agrees to submerge himself into the system, if he consents to being defined by others’ views of what he is supposed to be, he gives up his freedom ‘to see, to understand, and to signify’” [11]. At times, most of us hide behind professional qualifications and guard ourselves with the armor of publications. Berg and Seeber invite us, as university educators, to say “hello shadows” and admit that “more is not necessarily better” [3]. They point out that “there is always a ‘shadow CV’ — a list of detours, delays, and abandoned projects which we hide. We all have one and we should be more open about it” [3]. On the other hand, if the teacher is not submerged and impermeable but “willing to take the view of the homecomer and create a new perspective on what he has habitually considered real, the teacher may become the project of a person vitally open to his students and the world” [11].
The techno-corporate culture of the university lures us to reach ever more eagerly for more effective tools to produce and perform: AI comes in handy and accelerates this culture even further. We, as teacher strangers, need to be especially wide-awake in our everyday classrooms to become the “project of a person” rather than the project of a machine. The personal, narrative, and existential dimensions of teaching can all too easily be overlooked and submerged in the mechanical metanarratives of techno classrooms powered by AI. As Greene warns us: “the teacher is frequently addressed as if he had no life of his own, no body, and no inwardness…. as infinitely controlled and accommodating, technically sufficient, impervious to moods” [12].