Designing a Learning Environment

without the features you want: a struggle story

Hey readers! I’ve just finished my first stab at Assignment 2 in my Learning Technologies: Selection, Design, and Application course. It has been a learning experience full of ups and downs. But I see something of utility shaping up. You can get a login link to my course sandbox on our course Canvas page (sorry internet lurkers, this one isn’t for you).

Platform Choice 

I built the course in Edsby, our division’s Learning Management System. I knew this decision would impose limits—Edsby lacks several features common in other LMSs—but I welcomed the challenge of approximating those features within a familiar environment. Also, it’s sort of fun to find ways to work around limitations and problems 🙂

Audience Lens, Hidden Curriculum & Assessment (oh my!)

Designing the course as a divisional certificate PD offered a double benefit. Many teachers have never seen Edsby “from the student side,” so completing the course lets them experience its interface firsthand. That perspective shift—alongside activities such as embedded Padlets, polls, and streamlined content panels—forms a hidden curriculum in which participants learn both about large language models (LLMs) and about effective Edsby design. Knowing my audience includes colleagues who describe themselves as “not tech-savvy,” I recorded short, captioned tutorials for every unfamiliar action—changing a Padlet display name, uploading a file, finding Copilot, etc.—so nobody is left guessing.

Assessment is intentionally lightweight but still purposeful. Every required Padlet activity and the final AI-analysis assignment is marked on a single pass/fail checklist: if all criteria are met the first time, the task is marked Complete; if anything is missing, I’ll return a brief note—usually within 48 hours—pinpointing what needs to be added or clarified. This approach models formative, mastery-oriented assessment, keeps marking manageable for me, and gives even tech-skeptical colleagues multiple low-stakes chances to succeed.
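For the curious, the checklist logic is simple enough to sketch in a few lines of code. (This is purely illustrative; the criteria names below are made up, not my actual rubric items.)

```python
# Illustrative sketch of the single pass/fail checklist; the criteria
# names are hypothetical, not the actual rubric.

def mark_submission(criteria):
    """Return 'Complete' if every criterion is met, otherwise a brief
    note pinpointing what still needs to be added or clarified."""
    missing = [name for name, met in criteria.items() if not met]
    if not missing:
        return "Complete"
    return "Please revisit: " + "; ".join(missing)

print(mark_submission({"posted to Padlet": True, "replied to a peer": True}))
print(mark_submission({"posted to Padlet": True, "replied to a peer": False}))
```

Either every box is ticked and the task closes out, or the returned note becomes the body of my 48-hour feedback message.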

Challenges & Pivots 

Problems surfaced quickly: Professional Development Groups in Edsby accept assignment submissions, yet those submissions vanish because PD groups aren’t linked to a gradebook. I pivoted to a student-course framework for this prototype and plan to share it as a proof of concept for divisional staff learning.

Edsby’s main feed clutters fast and lacks threaded discussion, so I outsourced dialogue to Padlet. This aligns with Chickering & Ehrmann’s (1996) call for active learning and Bates’s (2015) three interaction types (learner–content, learner–teacher, learner–learner). The workaround—email notifications for every Padlet post—is clunky, but Padlet’s LTI integration could resolve that if I can get our IT department to enable it. That won’t happen while we’re still in the course, but it would be a great feature for other teachers in the future.

By confronting Edsby’s constraints head-on—and documenting practical pivots—I aim to model the same critical, creative mindset toward technology that the course has encouraged us to embrace so far. 

References

Bates, A. W. (2015). Choosing and using media in education: The SECTIONS model. In Teaching in a digital age. Tony Bates Associates Ltd. https://opentextbc.ca/teachinginadigitalage/part/9-pedagogical-differences-between-media/

Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. AAHE Bulletin, 49(2), 3–6. https://go.exlibris.link/N0tYMtWd

Using AI Text Levelling Tools

a differentiation solution?

Condensing text has always struck me as one of gen-AI’s genuine strengths—especially with passages only a page or two long. Because colleagues and I constantly wrestle with teaching complex ideas to readers at wildly different levels, I decided to run a little experiment.

I grabbed a section from an open Canadian-history textbook on Winnipeg’s water supply and its century-long impact on Shoal Lake First Nation. (Copyright dodged!) Then I sent the same passage through two “grade-five level” text-levelling tools. After the fun I had last week coding responses (sadly, I am not being sarcastic), I did a bit of the same here. The results were fascinating. My hunch is that these tools perform better in tightly structured subjects like science or math, but I wanted to see how they’d handle a topic that matters deeply in Winnipeg and in which structures of power and colonial legacy carry significant weight.
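As a rough sanity check on what “grade-five level” actually means, readability formulas are handy. Here’s a minimal sketch of a Flesch–Kincaid grade-level estimate with a very naive syllable counter (the actual levelling tools almost certainly use something more sophisticated, so treat this as ballpark only):

```python
import re

# Rough Flesch-Kincaid grade-level estimate. The syllable counter is a
# naive vowel-group count, so results are a ballpark, not a verdict.

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syll = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syll / len(words)) - 15.59
```

Running the original passage and each tool’s output through something like this gives a quick read on whether the rewrite actually landed near the target grade.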

In a perfect world you’d use an AI system that lets you spell out the key concepts that must survive the rewrite, but that raises the stakes for prompt quality. For this assignment I stuck with true paste-and-go tools—the kind that lure in brand-new or still-skeptical AI users.

I’ve bundled my heuristic, the side-by-side outputs, and a brief analysis in a Genially presentation (link below). Make sure to use the “show interactive elements” button in the top right corner so that you don’t miss any interactive content. I’d love to hear your thoughts.

What do LLMs tell me to worry about?

And what can I figure out from what they don’t say?

I went a bit overboard.

I started with two LLMs, then kept adding one more to the list, and ended up with a 20+ minute video, hours’ worth of unused footage, and a look at how Meta AI, ChatGPT (o3), DeepSeek, and Copilot handle the same question.

Fun Fact: I used the AI features in CapCut for the emoji captions!

Regardless of my overkill, it was fun. I’ve attached a couple of extra things aside from the video itself.

  1. A link so that you can check out my original prompts and the codes I applied in my analysis
  2. A couple of interactive graphs that I made in Canva so that you can see some of the data I pulled from my analysis. The charts are interactive, so click around a bit; the labels in the white menu bar under the titles allow you to see one set of information at a time.

I have to say, I’m tempted to strip the model names from the responses and my Excel sheet of records, upload them into ChatGPT and DeepSeek, and see what they notice. Should I do it?

References

Coleman, B. (2021). Technology of The Surround. Catalyst: Feminism, Theory, Technoscience, 7(2), 1–21. 

Crawford, K. (2021). Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. Yale University Press. 

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.18574/9781479833641

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794

Learning Environment Analysis

The Learning Environment evaluation rubric was an interesting assignment for me, as I joined a group focused on post-secondary education, despite all of my teaching experience being at the middle and high school levels. Specifically choosing Canadian Memorial Chiropractic College (CMCC) as our organization provided an excellent opportunity to explore how technology could be leveraged in a program that relies heavily on in-person and hands-on practicum. As I joked in one of our meetings this week—I don’t think I would be willing to go to a chiropractor who was trained only virtually! As such, it became clear that the platform we recommended needed to complement, not replace, face-to-face and practical training.

I had a lot of fun collaborating to develop the rubric for this assignment and weaving together elements from both the SECTIONS and CITE models to create a more holistic overview – what we have entitled the LEARNERS Institutional Needs Assessment Scale and the LEARNERS Learning Tool Assessment Calculator. While the SECTIONS model offers a clear lens for classroom integration, the CITE framework (aimed at global development) brings in valuable perspectives around equity and community benefit—something I believe should be considered in a Canadian context as well. That said, the CITE model can be difficult to navigate, which led us to focus on identifying overlaps and building something new that worked for our scenario. You can see the Needs Assessment Scale here, and the Assessment Calculator here.
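To give a flavour of the basic mechanic behind an assessment calculator, weighted scoring does most of the work. (The criteria and weights below are invented for illustration only; they are not the actual LEARNERS instrument.)

```python
# Hypothetical weighted-rubric calculator; criteria names and weights
# are illustrative, not the actual LEARNERS Assessment Calculator.

WEIGHTS = {
    "ease of use": 0.25,
    "cost": 0.15,
    "equity": 0.20,
    "privacy": 0.20,
    "pedagogical fit": 0.20,
}

def tool_score(ratings):
    """Combine 1-5 ratings into a single weighted score out of 5."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

print(round(tool_score({"ease of use": 4, "cost": 3, "equity": 5,
                        "privacy": 2, "pedagogical fit": 4}), 2))
```

The interesting design work is in choosing the weights: shifting weight toward equity or privacy is exactly where the CITE perspective changed our conversation.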

One key realization for me during this process was the difference between equity and accessibility in evaluating a technology’s appropriateness. Coming from a public school background, I often prioritize equitable access across diverse devices and connectivity levels. However, in the context of CMCC, with a smaller and more homogenous student body, these concerns were not as high on the institutional priority list. This highlighted how institutional context truly shapes which values are seen as essential—and which are optional.

This project also gave me the opportunity to explore two LMS platforms I hadn’t previously encountered: Docebo and Google Classroom. Docebo, which is used largely in corporate settings, did not sit well with me. Its marketing—“There is no reason we can’t quadruple revenue in the next two years… Docebo has allowed us to create an education engine that’s very plug-and-play and very scalable” (Docebo, 2025)—left me wondering whether education was being reduced to a one-size-fits-all revenue model. That’s obviously beyond the scope of our rubric, but it left a lasting impression (and not a good one). That being said, it offered almost all of the bells and whistles you could be looking for 🙂

Google Classroom is a more familiar and affordable option, but I worry that its low cost is being subsidized through user data collection. The recent bankruptcy of 23andMe (Allyn, 2024), and the concerns about what might happen to user data post-collapse, made me reflect on the fragility of digital trust. While Google Classroom receives a passing grade from Common Sense Media, even their evaluation notes several red flags around data use.

The following two images (Common Sense Media, 2022) show concerns re: data in the Google Classroom ecosystem.

By the end of this assignment, I found myself increasingly skeptical that a truly ethical, learner-centered LMS exists. This exercise sharpened my ability to evaluate tools critically—but it also reinforced my concerns about the broader systems behind them.

References

Allyn, B. (2024, October 3). 23andMe is on the brink. What happens to all its DNA data? NPR. https://www.npr.org/2024/10/03/g-s1-25795/23andme-data-genetic-dna-privacy

Bates, A. W. (2015). Teaching in a digital age. Tony Bates Associates Ltd. https://opentextbc.ca/teachinginadigitalage

Common Sense Media. (2022, December 19). Common Sense Privacy Evaluation for Google Classroom. Privacy.commonsense.org. https://privacy.commonsense.org/evaluation/Google-Classroom

Docebo. (2025, April 21). The LMS for education. https://www.docebo.com/solutions/education/

Osterweil, S., Shaw, P., Allen, S., Groff, J., Kodidala, S. P., & Schoenfeld, I. (2015). A framework for evaluating appropriateness of education technology use in global development programs. https://dspace.mit.edu/bitstream/handle/1721.1/115340/Summary%20Report_A%20Framework%20for%20Evaluating%20Appropriateness%20of%20Educational%20Technology%20Use%20in%20Global%20Development%20Programs.pdf

Here’s where I want to go with this

A vision for what may come out of ETEC 524

Guten Tag, meine Leser!

(Or to those of you not currently obsessed with working your way through the Duolingo German course—good day my readers.) 

For those of you new to my blog, willkommen! I’m Morgan, a secondary school teacher-librarian and current student in the Master of Educational Technology program through the University of British Columbia. I’m just starting ETEC 524, Learning Technologies: Selection, Design and Application, and this seems like the perfect excuse to dust off my poor, neglected blog. If you scroll through past posts, you’ll get a sense of my background—but here’s the Coles Notes version.

This is my fifteenth year teaching in the public school system in Manitoba, mostly at the middle and high school levels. On paper, I think I was supposed to be a history teacher, but I’ve done a little bit of everything—core classrooms and upper-middle humanities. Seven years ago, I was asked to move into a teacher-librarian role, and I haven’t looked back since. As this blog shows, this is my second program at UBC; my first was the LIBE Diploma, which gave me excellent training in running a well-rounded library program. Librarianing is the best. I get to buy books, collaborate with teachers, curate across multimodalities, nag people about copyright (not gonna lie, my least favourite part), and help guide future-focused pedagogy. I considered a Master of Library Studies but felt that this program better fit my interests, the needs of our space, and where I see the future of libraries heading.

For this course, I’m interested in bridging the gap between healthy communities and the overwhelming amount of digital content at our fingertips. How do I help students not just find information, but apply it to their own lives? Moving between in-person and virtual spaces is part of daily life, but how do we make that shift feel practical for learners? Maybe it’s the creep of middle age making me critical, but many students seem increasingly disillusioned with school. How do we build learning environments where students critically engage with tech beyond academic checkboxes? And how do I ensure I’m using technology for true redefinition (Puentedura, 2009) rather than using resource-heavy tools for tasks that could just as easily be done on paper? As a librarian, I see the aftermath of a lot of poorly planned tech investments, and I don’t want what I design to add to the mess. 

Best golden grill, best fluffy texture, best unusual fillings. One could learn much, mastering the perfect pancake.

What I hope to develop is a course where students choose a demonstrable skill—something they truly want to learn—and build it over a semester. They would set goals, manage their time, reflect on their progress, tackle challenges, and share their learning with others. For example, I might choose to learn how to make the perfect pancake (a worthy pursuit, in my opinion). I’d network with cooks, test recipes, reflect on my process, and document what I learn so I could share it with others. The course would wrap up with a community celebration where students showcase their skills. It’s still just the glimmer of an idea, but I’m hopeful this class will help me turn it into something practical and worth running. 

The challenge, of course, is designing something meaningful and manageable when students will pick skills I know nothing about—and that’s kind of the point. I won’t be the expert, but I can build structures to help them find reliable sources, network and connect with experts, and reflect on their learning. That’s where I hope this course will help me grow—giving me the tools to better select and apply technologies that support diverse, self-directed learning without turning the course into a chaotic free-for-all. 

This course feels like the right fit to help me move that idea forward. The frameworks we’ll explore—like SAMR and SECTIONS (Bates, 2015)—can help me evaluate whether my design choices are meaningful or just adding extra steps. The focus on learning environments, interaction, and engagement will help me balance student independence with community-building. The work on assessment will push me to clarify what success looks like when every student is learning something different. Later modules on content creation, multimodal presentation, and communication will give me practical tools to support students in sharing their learning in ways that go beyond the traditional slideshow or essay. The final assignments are perfectly timed to help me produce both a structured unit and a tech integration proposal—directly aligned with my course concept.

In short, I hope this course will help me move from intention to implementation—grounding my ideas in research-backed frameworks and best practices, and giving me peer and instructor feedback on my course design. Specifically, I hope to strengthen my ability to design learning environments that foster student agency, apply digital tools purposefully, develop process-based assessment strategies, and support students in sharing their learning in meaningful ways. 

To do this, I’ll need access to examples of blended learning structures, readings on assessment for self-directed learning, and opportunities to experiment with digital tools for documenting learning. I also hope to learn from my peers—many of whom bring different teaching contexts and insights that could help me refine my thinking. 

By the end of our time together, I know this course will help me take a meaningful step forward in becoming a digital-age teaching professional—someone who not only navigates the evolving world of educational technology but helps students do the same, critically, creatively, and ethically. 

References

Bates, A. W. (2015). Teaching in a digital age. Tony Bates Associates Ltd. https://opentextbc.ca/teachinginadigitalage

PowerSchool. (2021, April 13). SAMR Model: A Practical Guide for K-12 Classroom Technology Integration. https://www.powerschool.com/blog/samr-model-a-practical-guide-for-k-12-classroom-technology-integration

‘Artificial’ Intelligence

Whose artificial intelligence is it anyway?

I went a bit overboard this week, something I may not be able to sustain long term – but I had a lot of fun putting this together! The animated video and clipart are courtesy of Adobe Express.

It should be known that the astronaut is just a preset character in the “animate from audio” function in Adobe Express, but how serendipitous. Little guy looks a lot like me!

References

BBC News. (2016, January 26). AI pioneer Marvin Minsky dies aged 88. BBC News. https://www.bbc.com/news/technology-35409119

Biography.com Editors. (2020, July 22). Alan Turing. Biography. https://www.biography.com/scientists/alan-turing

Buolamwini, J. (2019, February 7). Artificial Intelligence has a problem with gender and racial bias. Here’s how to solve it. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/

Chollet, F. (2019). The measure of intelligence. ArXiv. https://arxiv.org/abs/1911.01547

Donovan, P. (n.d.). Herbert Simon: Father of Artificial Intelligence. UBS Nobel Perspectives. Retrieved January 24, 2024, from https://www.ubs.com/microsites/nobel-perspectives/en/laureates/herbert-simon.html

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru

Harris, A. (2018, November 1). Human languages vs. programming languages. Medium. https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252

Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. Vox. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

McCarthy, J. (2019). What is AI? / Basic Questions. Stanford.edu. http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html

OpenAI. (2024). ChatGPT (GPT-4 version) [Large language model]. https://chat.openai.com/chat

OpenAI. (2024). DALL-E (Version 3) [Text-to-image model]. https://labs.openai.com

Sutton, R. S. (2020). John McCarthy’s definition of intelligence. Journal of Artificial General Intelligence, 11(2), 66–67. https://doi.org/10.2478/jagi-2020-0003

Wikipedia Contributors. (2019, October 15). John McCarthy (computer scientist). Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

Winston, P. H. (2016). Marvin L. Minsky (1927–2016). Nature, 530(7590), 282–282. https://doi.org/10.1038/530282a

But Can You Use It?

Several old wooden wagon wheels leaning against a concrete wall.

What is usability? 

Perhaps it is simpler to view usability through the lens of an ancient technology. The wheel proliferated because it is infinitely usable. To borrow from Issa and Isaias’ usability criteria (2015, p. 33), the wheel was easily understood and adopted across various cultures (learnability). It could be adapted for use in many places (flexibility). When properly designed, it rarely failed (robustness). The wheel significantly reduced the effort required for transportation (efficiency), and its design was simple, effective, easily reproduced, and impossible to forget how to use (memorability). Small imperfections don’t usually affect its utility (error handling), and it made people’s lives better and easier (satisfaction). When we are designing tools for use, usability must be the end goal, or else we are building Rube Goldberg machines: contraptions that perform tasks in indirect and convoluted ways (Wikipedia Contributors, 2019) and that are more of a puzzle and pastime for the designer than solutions to widely held problems or ways to improve quality of life.

What about educational usability? 

From an educational lens, several ideas are missing. Unlike the profit-driven motives of the free market, which often lead to excluding certain users in technology design, the education system prioritizes inclusivity, catering to diverse learning needs, and supporting both teaching and learning processes. When technologies have been designed for commercial rather than educational usability, educational outcomes and learning effectiveness are often sacrificed. Features that ensure support for different age groups, modes of learning, levels of technological proficiency, and integration with learning standards are not marketable in the same ways.

Data privacy is another concern. Individuals of legal age can consent to technology use and data terms, but schools must prioritize student data privacy, and they often face higher costs for technology that adheres to these standards, unlike commercial tech subsidized through data sale (Canadian Centre for Cyber Security, 2023). Ultimately, one could define educational usability as technology that meets Issa and Isaias’ criteria while also being ethically responsible and inclusive, prioritizing the learning process, and recognizing the financial and security challenges unique to the educational field.

When usability studies go wrong 

User-centred design is integral to usability. Woolgar effectively argued that in the usability study he observed, the user was configured to fit the technology rather than the reverse. This is problematic because it can lead to a notable mismatch between user needs and the technology, resulting in a product that is difficult to use or that doesn’t serve the problem it is intended to address. It also leads to potential user frustration and disengagement, because users are forced to adapt to a system that doesn’t align with their natural behaviours, expectations, and motivations.

One issue that stood out for me was the very close presence of testers during the process. The testers were physically in the space and verbally guided the users along the way, telling users when they could give up, or providing positive reinforcement to encourage useful behaviours like reading a manual (Woolgar, 1990, p. 85). Would users outside of this ecosystem persist in their use of the technology, and where would users genuinely struggle? Configuring the users muddies the water. 

Woolgar highlighted the insider/outsider contrast, with insiders like tech support often surprised by outsiders’ real-world use of technology, exemplified by simplified computer instructions posted in a school computer lab (Woolgar, 1990, p. 72). The sheer depth of knowledge and experience of designers acts as blinders to the everyday needs of average users, who may not share the same level of expertise or perspective. Perhaps a better process for usability testing would have helped create a device that was more intuitive.

Usability over time 

Woolgar’s points on usability are particularly relevant when considering the DOS-based 286 computers he references in 1990, which, due to their novelty and hardware constraints, necessitated user adaptation and lacked key usability aspects like the learnability and satisfaction set out by Issa and Isaias 25 years later. It is likely that in Woolgar’s case users legitimately needed to be configured. Now that technology is ubiquitous and screen-recording and keystroke-logging tools exist, we can take the lessons learned from Woolgar and apply them in ways that help configure the tech to users rather than vice versa. Testers no longer need to be physically in a room with those doing the testing, as we can gather helpful data virtually. In Woolgar’s case, usability studies were an end-of-process project to be completed shortly before heading to market, whereas Issa and Isaias frame usability evaluation as a recursive process of prototype releases, a significantly more proactive, user-centric approach. Ultimately, when read together, these pieces highlight how important it is that our conceptions of usability do not remain static.
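To make the virtual-data point concrete: remote usability telemetry can be as simple as timestamped interaction events, collected without a tester in the room. (The event and field names below are illustrative, not from any particular analytics product.)

```python
import json
import time

# Minimal sketch of remote usability telemetry: timestamped interaction
# events logged as users work. Event and field names are illustrative.

def log_event(log, action, target):
    log.append({"t": time.time(), "action": action, "target": target})

events = []
log_event(events, "click", "submit-button")
log_event(events, "keypress", "search-field")
print(json.dumps(events, indent=2))
```

A stream like this lets evaluators see where users genuinely struggle, free of the prompting and reassurance that Woolgar’s in-room testers provided.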

References 

Canadian Centre for Cyber Security. (2023, August 4). Protecting your information and data when using applications- ITSAP.40.200. Canadian Centre for Cyber Security. https://www.cyber.gc.ca/en/protecting-your-information-and-data-when-using-applications-itsap40200 

Goldberg, R. (1931). Self-operating napkin [Comic]. In Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/a/a9/Rube_Goldberg%27s_%22Self-Operating_Napkin%22_%28cropped%29.gif 

Issa, T., & Isaias, P. (2015). Usability and Human Computer Interaction (HCI). In Sustainable Design (pp. 19–36). https://doi.org/10.1007/978-1-4471-6753-2_2 

Wikipedia Contributors. (2019, April 10). Rube Goldberg machine. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Rube_Goldberg_machine 

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1_suppl), 58–99. https://doi.org/10.1111/j.1467-954x.1990.tb03349.x