So a few highlights from the past month as I continue to refine things. And really, it’s a lot of refinement and incremental progress. Much of the “big” work is done and now it’s usability, it’s — well, I was going to use perfecting, but nothing is perfect yet!
- Mathematics on the web. This was a big struggle and remains one. It has always been a pain to post math well on the web. For user input, I'm keeping it to keyboard-only entry; I have neither the time nor see it as a priority to build an equation editor. chatOAME is not meant to be an equation solver, and most of our notation is relatively straightforward K-12. Okay, some rational expressions can be a pain to enter, but again, we're not here to solve equations; there are better tools for that. Now, for output. There are only a few options available and I went through them 🙂 It took me about a week to get one of them, MathJax (which renders LaTeX notation), to work somewhat consistently. One of the problems is that OpenAI Assistants don't always follow instructions, and even though I have prioritized always posting responses in MathJax, it doesn't always listen.
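Since the Assistant doesn't reliably wrap its math in MathJax, one workaround is a post-check on each response before it reaches the page, re-prompting when the delimiters are missing. A minimal sketch in Python; the function name and the crude "looks mathy" heuristic are mine for illustration, not chatOAME's actual code:

```python
import re

# MathJax's default delimiters: \( ... \) inline, \[ ... \] or $$ ... $$ display.
MATHJAX_DELIMS = re.compile(r"\\\(.*?\\\)|\\\[.*?\\\]|\$\$.*?\$\$", re.DOTALL)

# Crude signals that a response contains un-wrapped math (illustrative only).
BARE_MATH_HINTS = re.compile(r"[a-zA-Z]\^[0-9]|\\frac|\bsqrt\(")

def needs_reprompt(response: str) -> bool:
    """True if the response looks mathy but has no MathJax delimiters,
    i.e. the Assistant ignored the 'always use MathJax' instruction."""
    has_delims = bool(MATHJAX_DELIMS.search(response))
    looks_mathy = bool(BARE_MATH_HINTS.search(response))
    return looks_mathy and not has_delims

print(needs_reprompt("Solve x^2 - 5x + 6 = 0 by factoring."))       # → True
print(needs_reprompt(r"Solve \(x^2 - 5x + 6 = 0\) by factoring."))  # → False
```

If the check fires, the app could quietly re-ask the Assistant to restate its answer in MathJax rather than showing the raw text to the teacher.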
- Mathematics on the web into a document. So, when you paste MathJax into a Word document, it doesn't 🙂 You can go through the steps of telling the Equation Editor that each equation is LaTeX, but if there are a lot of equations in your document, that gets annoying. Another problem to solve.
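One route around the per-equation clicking is to convert the MathJax delimiters into the dollar-sign forms that pandoc's default Markdown reader treats as math, then let pandoc build the .docx; pandoc writes TeX math out as native Word equations. A rough sketch (the helper name is mine, and this assumes pandoc is installed for the final step):

```python
import re

def mathjax_to_pandoc_markdown(text: str) -> str:
    """Rewrite MathJax-style delimiters into the $...$ / $$...$$ forms
    that pandoc's default Markdown reader recognizes as math."""
    text = re.sub(r"\\\[(.*?)\\\]", r"$$\1$$", text, flags=re.DOTALL)  # display math
    text = re.sub(r"\\\((.*?)\\\)", r"$\1$", text, flags=re.DOTALL)    # inline math
    return text

src = r"The roots of \(ax^2+bx+c=0\) are \[x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\]"
print(mathjax_to_pandoc_markdown(src))
# Then something like `pandoc notes.md -o notes.docx` turns each math run
# into a real Word equation, with no per-equation Equation Editor step.
```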
- Math Expectations. So I brought in the Ontario Expectations so that when chatOAME generates tasks, or you want to reference them in your discussion, you have easy access. That's all great and it works well EXCEPT that I have to go through the Expectations and write out the limitations based on Ontario curricula, the part that is just known by Ontario teachers. For example, in Grade 10 we don't solve quadratics with imaginary roots. In Grade 11, we solve exponential equations using guess-and-check, not logarithms. Calculus? We don't do related rates. So the expectations give you a baseline, but there's no upper bound, and if the teacher isn't aware, the AI goes off doing (admittedly) good mathematics but overwhelms the situation. I'm sure the situation will be worse in the elementary grades (which I haven't started on yet).
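One way to encode those "known by Ontario teachers" upper bounds is a small lookup of out-of-scope topics per course, spliced into the generation prompt. A sketch, with my own hypothetical mapping of the examples above onto course codes (not chatOAME's actual data):

```python
# Illustrative mapping only: each course pairs with the implicit limits
# that the written expectations don't state.
CURRICULUM_LIMITS = {
    "MPM2D (Grade 10)": [
        "Quadratics are limited to real roots; no imaginary/complex roots.",
    ],
    "MCR3U (Grade 11)": [
        "Solve exponential equations by guess-and-check, not logarithms.",
    ],
    "MCV4U (Calculus)": [
        "No related-rates problems.",
    ],
}

def build_guardrails(course: str) -> str:
    """Return a prompt fragment listing the course's upper bounds,
    or an empty string if none are recorded yet."""
    limits = CURRICULUM_LIMITS.get(course, [])
    if not limits:
        return ""
    bullets = "\n".join(f"- {limit}" for limit in limits)
    return f"Stay within these {course} limits (do NOT exceed them):\n{bullets}"

print(build_guardrails("MPM2D (Grade 10)"))
```

The fragment would be appended to the task-generation prompt so the AI has an explicit upper bound even when the teacher doesn't think to supply one.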
- What's in a Task? So when you ask for a worksheet, what's on a worksheet? Is it just a repetitive series of similar questions building in difficulty? My argument is no (although most worksheet-generation software would disagree). As I've been working on the chatOAME Tasks site, I've been trying to define each task type using some of the better research articles on it. For example, when I was looking at what makes the best Simulation Activity, I pulled a series of research and PD articles in that area, tossed them into ChatGPT, asked what the common themes were, and then created a task-design protocol based on those themes.
- Word Problems. Okay, this one I'm going to call a success. I went through the design phase outlined above, but I wanted to make sure this was clearly an OAME-aligned product. So I added the requirement that the context had to be Canadian in some way (history, geography, business, arts, sports, literature, etc.) or FNMI-related (history, culture, arts, sports, etc.). The former was interesting: it introduced me to the Vancouver Island marmot, an endangered species; had me tracking an Arctic fox with a dying GPS battery; and it developed a really cool problem on the CN Tower. The FNMI approach, though, was a challenge for me, as it's not my area of expertise, but thanks to a generous colleague whose area it is, I was able to craft a method that isn't just swapping an Indigenous name into a problem. It has written authentic problems on the growth in value of Kenojuak Ashevak's artwork, First Nations' entrepreneurship in Winnipeg, and Haida construction of totem poles, and it really tried to do something on Anishinaabec canoes and vectors.
- Word Problems, almost. So the above was a success in that it generates some great word problems well situated in Canadian and FNMI contexts. It is not, though, always successful in full execution. The CN Tower one? Well, it got overly challenging too fast and wandered into piecewise functions. The canoe/vectors problem? It started off really cool, involving art and design, but then flipped halfway through and the raven design element was suddenly a real raven flying a route in a vector problem. Now, that's going to happen, but (a) I have to create guardrails for expectations (see the Math Expectations item above) and (b) teachers cannot just blindly hand out generated tasks to students without actually reading both the problem and the solution themselves. (I say this as someone who has, in the past, blindly handed out a problem to students… doctor, heal thyself!) Yes, chatOAME is required to provide a solution to every question it creates, but it doesn't always listen to its rules. I've described AI as being like a Grade 9 student: it occasionally listens to instructions, and then only through its own interpretation.
Okay, that’s likely enough for now. I write mostly to reflect and document but if folks are taking the time to read through this, you needn’t experience a marathon 🙂 If you are an Ontario Math Teacher interested in working with this development project, let me know!