And now for something completely different: Musings on solitude and loneliness

alternate title: Morgan can connect almost any topic back to cows

I grew up with cattle, a herd animal. You can’t keep a cow by itself – not if you want it to be happy. As a farm kid, summers were cattle show season, and while most of the herd went off to green pastures, the show stock always had a few companions stick around in the small pasture off the farmyard to keep them company (and to discourage escape artists). While humans have adapted to independence more than our bovine brethren – probably because of our position as predators – we are still meant to live with other humans. We do live in a society, after all. We are completely reliant on our caregivers for much of our childhood, and we’re kidding ourselves if we don’t admit that reliance carries into our adulthood as well. Solitary confinement (a misnomer if there ever was one) is considered cruel and unusual punishment.

To me, the difference between solitude and loneliness must be rooted in the quiet, forced contemplation of the hunter – a time to gather one’s thoughts and sit in the silence required to stalk large prey – so that you can come back to your people with both food and as a better member of your society. Solitude centres us. The voice of loneliness, though, is one of failure. Being cast away has been a traditional societal punishment, from Greek ostracism to the British penal colony of Australia and the Gulags of Stalinist Russia, or the LGBT child exiled from their home or sent to conversion therapy. The message? Others don’t want you around. You have no value. Others are better off in your absence.

Solitude is empowering. Loneliness is not.

One of the most interesting things about the time we live in is how our technological structure enforces loneliness, but not solitude. The very real fear we have of judgement means that the things we would not be willing to express in a classroom or work conversation, many of us will say through the distance of our device screens. Thus we manage to feel less alone while, at the same time, tearing up the social contract of being a member of a group of people. And if the only place we can be our true selves is removed from the people themselves, that strikes me as the loneliest thing of all.

A note on this post

Sometimes I like to mess around with LLMs, just to see what they will give me. Today, I gave Claude the following prompt, and went back and forth a few times to test out what I would come up with – and what it would have to say about what I had to say:
“I am worried about my use of AI as replacing my creative and critical thinking skills. Can you ask me some questions for me to free ball answers to in short paragraphs? And rate my answers based on their novelty, and other stereotypical things like grammar and word choice? Ideally, you’ll have plotted your answer to the question as well, but not reveal it to me before you answer.”
I was a bit vague on my criteria, but what it came up with were novelty, specificity, word choice, and structure, each on a scale out of 10 (whatever that means). A couple of the answers are actually – maybe even probably – worth fleshing out into something more. And since you, dear blog, have been a bit abandoned thanks to my lack of grad school assignments, perhaps this is a good home for them. They were interesting exercises for drafting, brainstorming, and seeing where a random question takes me.
The fourth question was the following: What is the difference between loneliness and solitude? I have taken my initial stream-of-consciousness draft and developed it into something more polished above.

What now, brown cow?

Yesterday I handed in my last assignments for the final course of my Master of Educational Technology degree through the University of British Columbia. While I won’t graduate until May, it marks the end of an era – for just shy of six years I have been plugging away at courses through UBC, first with my post-Bacc in Teacher-Librarianship, and then this program. Honestly, it probably won’t sink in until this January, when the next semester begins and I am not juggling the demands of my day job and academia (and supervising the Varsity Boys Volleyball team, haha).

I am thankful for the learning of these years: the ideas and architecture of teaching, learning, and technology that have been made visible through thinkers and makers with skill much greater than my own. At least once a week I am reminded of the meme posted below, and when I pull back from my own unique situation, lenses, and experiences, I continue to be humbled by all of the things I know I don’t know – and even more so the things that I don’t know that I don’t know.

Random internet meme saved on my phone, original source unknown; thought about constantly.

One thing I wish I’d had throughout this degree — and especially in the final projects — was a small community of co-designers. Working full-time while studying part-time often meant creating in isolation or across time zones and distance, without the iterative conversations that sharpen ideas and reveal blind spots. Learning is ecological, and I felt the absence of that ecology at times. For anyone considering MET, I strongly encourage it – and I especially encourage you to take the financial hit and enrol in as many summer institute classes as you can. It was these experiences that really got me keen on the importance of hybridity. Even so, the work felt like a reminder that none of us should be designing the futures of education alone.

The more I learned, the more I realized that technology is never neutral — it always arrives embedded with assumptions, values, and power. I came into this program as a tech optimist, but leave it as a tech realist. This kind of program (both the T-L one and the Ed Tech one) would not have been possible for me given my geographic location; such programs don’t exist here in Manitoba. This program was a gift — but one that came wrapped inside an increasingly precarious technological landscape. To borrow a word from Cory Doctorow, the enshittification of technology has hollowed out some of my previous optimism. When so much of technology’s truly equalizing and democratizing potential is hidden behind paywalls – and even then gradually whittled away behind ever-increasing add-ons – you begin to wonder whether your world-view and those of the architects of these applications are in alignment.

As a teacher-librarian trying to think carefully about ed tech, I see libraries – public, elementary/secondary school, and university alike – as a solution to some of these challenges. This will require sustained public funding, and it will require the (mostly) corporations that design these materials to subsidize access, to ensure these technologies don’t become yet another divide between the haves and have-nots. It is not hard to imagine a future where the ability to participate in digital culture depends entirely on one’s ability to pay for the tools required to access it. Perhaps this is no different than the inequities of print culture — but it still runs counter to the vision of education I was raised in, both as a student and later as a teacher.

I also hope that the notoriously slow-moving systems of curriculum development and education focus on what strikes me as the most crucial adjustment of the LLM age – assessment. Our paradigm has shifted almost instantaneously, and our previous methods of assessing work at its conclusion feel unsuited to the times we are living in. Honestly, they were never well suited – but they were easier, and for the students who were able to submit a polished product thanks to the assistance of a tutor, parent, or other support, their ‘knowing’ was never assured before either. I would hazard that it was our biases about who specific learners are, their backgrounds, and what they look like (plus maybe the discomfort of challenging involved families) that made us look the other way. AI didn’t break assessment — it revealed what was already broken.

I used ChatGPT to create these (imperfect) graphs showing the shift in assessment that AI will necessitate. Pull the tab to the right to see how we’ve traditionally thought about product-based assessment, and to the left to see my hypothesized AI assessment paradigm. Some day I will make ones that more accurately represent my thinking (the AI Use label at the dip shouldn’t be there!)

While I worry that AI will mean increasingly large class sizes, I actually think it calls for more teachers, more human mentorship, more conferencing and check-ins along the way – a necessity that is almost impossible in our current set-up. If AI teaches us anything, it’s that students don’t need us less — they need us differently.

What NotebookLM Remediates (and other LLM tools too for that matter)

Imagined as a rousing political speech, with patriotic music slowly swelling in the background.

Colleagues, I know many of you are excited about NotebookLM, especially that uncannily almost-human podcast feature. We upload our readings, videos, and professional documents, then receive instant synthesis supporting multimodality and differentiated instruction. But I want us to consider what’s happening to our professional expertise when we adopt this tool—or any LLM-based assistant. We’re witnessing the remediation of educational expertise itself, transforming teachers from knowledge-holders into knowledge-brokers. NotebookLM stands out for grounding its responses in uploaded materials, lending its outputs an authority that masks their mediation.

To understand what’s at stake, let me introduce a concept from media studies: remediation. NotebookLM remediates the entire research apparatus of teaching—our file cabinets, OneDrive folders, annotated textbooks, and accumulated professional wisdom. Bolter and Grusin (2000) argue that remediation occurs through networks of formal, material, and social practices:

Formally, it remediates the academic literature review, the planning notebook, even Socratic dialogue, while promising “complete and comprehensive access to information” and obscuring the interpretive labor that transforms information into knowledge (Papacharissi, 2015).

Materially, it replaces the physical artifacts of teaching expertise (marked-up curriculum guides, annotated student work, scribbles in margins) with algorithmic processes that appear transparent thanks to source citations yet remain hidden behind opaque design choices. NotebookLM produces what Bolter and Grusin (2000) describe as hypermediacy (visible layers of mediation like source links, formats, AI voices) that paradoxically creates a sense of immediacy and authority rather than inviting critique.

Socially, it remediates us as expert practitioners. When we upload materials and receive instant analysis, our professional authority shifts from knowing to prompting—a different kind of expertise entirely.

Goodbye Inquiry, Hello Output

Linguist Adam Aleksic (2025) argues that “truly knowing an answer requires struggling with uncertainty.” Consider planning a unit on New France in Canadian history—a unit Manitoba students often struggle to find relevant. Traditionally, this required understanding primary sources, synthesizing across texts, connecting to standards, curating materials, anticipating misconceptions, designing meaningful assessment.

NotebookLM generates all of this in seconds. But as Aleksic describes, “with each additional abstraction from uncertainty, the easier it is to find answers, and the more confident those answers sound.” The tool produces seeming pedagogical expertise with the “aura of truth, objectivity, and accuracy” that danah boyd and Kate Crawford (2012) identify in Big Data mythology.

Yet can we explain why these particular connections matter? In philosophical terms: do we know what NotebookLM claims, or merely believe what it tells us?

The Question Behind the Question

Aleksic describes how “the lost ritual of asking has collapsed the meaning of the question in the first place.” When we can instantly generate unit materials, we never wrestle with fundamental questions: Why teach about New France? What should students understand? How does this connect to their lived experiences?

These aren’t questions NotebookLM can answer. They require what Haraway calls “critical, reflexive relation to our own practices” (as cited in Papacharissi, 2015). The tool can synthesize curriculum documents but cannot interrogate why we chose those documents, what we’re unconsciously prioritizing, or whose perspectives remain absent.

As Aleksic (2025) writes, “figuring out which question to ask is more important than the answer itself.” But NotebookLM’s efficiency makes all questions appear equivalent. We’re “drowning in a sea of answers, forgetting how to ask the right questions.”

Meme depicting teachers choosing 'the unbearable lightness of information' (NotebookLM) over 'the impossible gravitas of knowledge' (traditional pedagogical synthesis)
It is not surprising that we are pulled to these tools – who has the time? Media scholar Zizi Papacharissi calls this tension ‘the unbearable lightness of information vs. the impossible gravitas of knowledge’ – and I feel that in my bones every Sunday night. (This meme was created with imgflip and supplemented with a screenshot of my own use of NotebookLM, plus other art from Canva)

Papacharissi (2015) captures this perfectly: AI outputs “oscillate between the unbearable lightness of information and the impossible gravitas of knowledge.” NotebookLM offers comprehensive information access but cannot deliver genuine pedagogical knowledge – the heavy weight of knowing that emerges only through sustained engagement with uncertainty.

Colleagues, I’m not asking us to abandon NotebookLM, but let’s use it differently. Treat its outputs as another text to interrogate, not authoritative synthesis. Our students need us to model what it means to genuinely know, not merely retrieve.

References

Aleksic, A. (2025, December 3). The importance of not knowing. The Etymology Nerd. https://etymology.substack.com/p/the-importance-of-not-knowing

Bolter, J. D., & Grusin, R. (2000). Remediation: Understanding new media. MIT Press.

Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Papacharissi, Z. (2015). The unbearable lightness of information and the impossible gravitas of knowledge: Big Data and the makings of a digital orality. Media, Culture & Society, 37(7), 1095–1100. https://doi.org/10.1177/0163443715594103

Why I Made Ed Tech Specialists Compare Search Results for My Professional Development Session on New Materialism

As a teacher-librarian, I’m constantly making decisions about which databases to subscribe to, which search tools to recommend, which encyclopedias to point students toward. These decisions often get framed as “neutral” – we’re just providing access to information, offering students “the right resources.” But are they?

This question started nagging at me during IP 2, where I analyzed software encyclopedias through McLuhan’s tetrad and Actor-Network Theory. I decided to test something simple: I searched for two controversial topics across the different encyclopedia subscriptions our division provides to students. The results weren’t just different—they were fundamentally different.

I sat there staring at two browser windows, and something clicked: this wasn’t a bug. This was a feature. Each platform was enacting a specific epistemology, a particular idea of what knowledge is. And my choice (as a librarian, and as someone who shapes student access to information) wasn’t neutral at all. I was choosing between worlds, while selling the guise of neutrality.

Why start with search?

When it came time to get to brass tacks on this assignment, I knew I needed an entry point that was practical. Not abstract. Not Barad discussing quantum entanglement — even though it’s fascinating.

Because it seems to me that if we want people to think outside of the box, we need them to realize that the tools they hardly think of as technological have been quietly organizing how knowledge appears to us for a very long time. They work quietly in the background, yet their output exposes what’s going on behind the scenes. I think this is what makes them a great place to start unpacking the complexity of ideas behind new materialism.

How the elements came together

So I designed a professional learning activity: choose a heated topic, search it in three different tools (Google, Wikipedia, TikTok), and compare what appears. Then unpack: How does each tool assemble knowledge?

I think the session would take about two hours to work through with a group, but could probably be done in an hour and a half. I have embedded audio files of my speaker’s notes into the presentation, but have also linked them here if you would rather read them. My presentation slides are directly below.

If you’re an educational technology specialist, a teacher, an administrator, or anyone who makes decisions about which tools students use, which platforms teachers adopt, or which systems organize learning, I’m inviting you to do something simple:

Pick a controversial topic. Search it in three different places. Compare what appears.

Then ask: What differences did this technology create?

It’s not a complicated activity. But I think it’s a critical and worthwhile one.

Because once you see how Google, Wikipedia, and TikTok assemble knowledge differently, you can’t unsee it. And that’s where the real work begins.

Not in finding the “right” tool. Not in establishing “best practices.” But in developing the literacy to read how tools shape what we can know, and the responsibility to choose—and keep questioning our choices—accordingly.

Assignment 2 – Part 2 – Reflection Time

AI Essentials for Educators: one step closer to reality

I love a good digital story—that’s the teacher‑librarian in me. So when Part II called for one, I went big. I split the module into four chunks: a gentle, non‑academic primer on LLM limitations; Shannon Vallor’s short talk on AI as a mirror; a practical on‑ramp via a choose‑your‑own‑adventure (CYOA) story; and some hands‑on AI tool interaction. My audience (Senior Years teachers new to gen‑AI) was not going to read Crawford’s Atlas of AI or Noble’s Algorithms of Oppression, so I let Clippee do the ranting instead.

I actually started in Twine, but Edsby doesn’t play nicely with Twine embeds. Google Sites would have worked, but I wanted to model inside the LMS they already use. So I pivoted to Canva. That constraint forced me to prune eight branches down to four core scenarios. I think this was ultimately a blessing, because it sharpened the key myths I needed teachers to bump into. Guided by Bruner’s claim that “practice in discovering for oneself” makes knowledge more usable in problem solving (Bruner, 1961), the branching story lets teachers feel the pitfalls before I name them. In Bruner’s “hypothetical mode,” learners aren’t “bench‑bound listeners” but co‑constructors; every click in the story and every prompt revision in the lab puts them in that role.

Multimodality mattered too. The New London Group’s push for multiliteracies (1996) and UDL principles nudged me to balance text, images, and short audio clips. I recorded voices in CapCut (yes, shameless self‑promotion—I want invites to co‑teach CYOA projects). Vallor’s mirror metaphor (2024) shaped Clippee’s tone: he “magnifies” what the AI quietly distorted, echoing Crawford’s critique of data extraction and Noble’s warnings about encoded bias. But in a much more accessible way.

Try out Teacher’s AI Adventure!

Within the walls of my module, H5P’s paywall (thanks, D2L) pushed me to CurrikiStudio for the formative checks. That choice wasn’t just about budget—Curriki is something teachers can actually replicate in their own Edsby pages tomorrow, without approvals or fees. Peer interaction is Edsby’s Achilles’ heel, so I farmed discussion out to Padlet. It’s clunky to add another tool, but the final course task also lives on Padlet, so repeated exposure helps. I even seeded sample posts so no one stares at a blank board.

Overall, this module blends hands‑on pragmatism with just enough theory for what my audience needs: useful, not heavy.

References

Bruner, J. S. (1961). The act of discovery. Harvard Educational Review, 31(1), 21–32.
Crawford, K. (2021). Atlas of AI. Yale University Press.
New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Noble, S. U. (2018). Algorithms of oppression. NYU Press.
Vallor, S. (2024). AI is a mirror of humanity [Video]. Institute of Art and Ideas.

Using AI Text Levelling Tools

a differentiation solution?

Condensing text has always struck me as one of gen-AI’s genuine strengths—especially with passages only a page or two long. Because colleagues and I constantly wrestle with teaching complex ideas to readers at wildly different levels, I decided to run a little experiment.

I grabbed a section from an open Canadian-history textbook on Winnipeg’s water supply and its century-long impact on Shoal Lake First Nation. (Copyright dodged!) Then I ran the same passage through two “grade-five level” text-levelling tools. After the fun I had last week coding responses (sadly, I am not being sarcastic), I did a bit of the same here. The results were fascinating. My hunch is that these tools perform better in tightly structured subjects like science or math, but I wanted to see how they’d handle a topic that matters deeply in Winnipeg, and one in which structures of power and colonial legacy play a significant role.

In a perfect world you’d use an AI system that lets you spell out the key concepts that must survive the rewrite, but that raises the stakes for prompt quality. For this assignment I stuck with true paste-and-go tools—the kind that lure in brand-new or still-skeptical AI users.

I’ve bundled my heuristic, the side-by-side outputs, and a brief analysis into a Genially presentation (link below). Make sure to use the “show interactive elements” button in the top right corner, so that you don’t miss any interactive content. I’d love to hear your thoughts.

What do LLMs tell me to worry about?

And what can I figure out from what they don’t say?

I went a bit overboard.

I started looking at two LLMs, then just kept adding one more to the list, and ended up with a 20+ minute video, hours’ worth of unused footage, and a look at how Meta AI, ChatGPT (o3), DeepSeek, and Copilot handle the same question.

Fun Fact: I used the AI features in CapCut for the emoji captions!

Overkill aside, it was fun. I’ve attached a couple of extra things beyond the video itself.

  1. A link so that you can check out my original prompts, and the codes I gave the responses for my analysis
  2. An interactive couple of graphs that I made in Canva so that you can see some of the data I pulled from my analysis. The charts are interactive, so click around a bit – the labels in the white menu bar under the titles let you see one set of information at a time.

I have to say, I’m tempted to strip the model names from the responses and my Excel sheet of records and upload them into ChatGPT and DeepSeek to see what they notice. Should I do it?

References

Coleman, B. (2021). Technology of The Surround. Catalyst: Feminism, Theory, Technoscience, 7(2), 1–21. 

Crawford, K. (2021). Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. Yale University Press. 

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.18574/9781479833641

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794