College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • HexesofVexes@lemmy.world

    Prof here - take a look at it from our side.

    Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

    I am not arguing exams are perfect, mind, but I’d rather doubt a few students’ inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

    Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students, but they do suggest those students can obfuscate AI work well.

    • AeroLemming@lemm.ee

      When I was in school, exams were taken on proctored devices that were locked down by the IT team to the point where you couldn’t even close the test software to look at something else. That’s not any less secure than having to hand-write your answers, and while it may have been expensive, I can’t imagine it being more expensive than missing out on automatic organization and/or grading for certain kinds of tests in the long run.

      • HexesofVexes@lemmy.world

        So, we’re working on a study comparing online vs. in-person exams.

        We’ve noticed that strict time limits alone tend to shift grades. A locked-down browser sounds great, but anyone can search on their phone, so proctoring is a must (but also time-consuming to check) if you want to get the intended effect.

        As for online grading, it’s a mixed bag. With a very strict rubric, Gradescope can save a lot of time, but otherwise it takes a lot longer. MCQs and single-number answers can be auto-marked, but they’re awful at assessing ability and should be avoided. Overall, grading online costs more than it saves, and tends to give much more rigid feedback to students.

        • AeroLemming@lemm.ee

          Thanks for the thoughtful response. Follow-up: if all students are facing the same direction, couldn’t you just set up a camera behind and above them to see if they’re hiding a phone behind the monitor? Hiding it under the desk can happen with or without a computer. If students are told you’re doing that, then even if you only glance at the camera feed during the test, the fact that any cheating would be on a recording that could be checked at any time would be a very powerful deterrent.

          • HexesofVexes@lemmy.world

            You could indeed go for such a setup; however, in a room with 50+ students it becomes very hard to angle a camera with a clear view of all of them, their screens, and under their desks. It’s easier just to walk around the room to invigilate. However, I might read up on this, as it might be an option for students with exam anxiety (I realise we look scary walking around the exam room!).

            • AeroLemming@lemm.ee

              That’s true, but the students won’t necessarily know that. There’s gotta be some solution that doesn’t send us back to the 20th century!

    • maegul@lemmy.ml

      Here’s a somewhat tangential counter, which I think some of the other replies are trying to touch on … why, exactly, should we continue valuing our ability to do something a computer can so easily do for us (to some extent, obviously)?

      In a world where something like AI can come up and change the landscape in a matter of a year or two … how much value is left in the idea of assessing people’s value through exams (and to be clear, I’m saying this as someone who’s done very well in exams in the past)?

      This isn’t to say that knowing things is bad or that making sure people meet standards is bad, etc. But rather, to question whether exams are fit for purpose as a means of measuring what matters in a world where what’s relevant, valuable or even accurate can change pretty quickly compared to the timelines of one’s life or education. Not long ago we were told that we wouldn’t always have calculators with us, and now we could have calculators embedded in our ears if we wanted to. Analogously, learning and examination are probably premised on the notion that we won’t be able to look things up all the time … when, as current AI (among other things) suggests, that won’t be true either.

      An exam assessment structure naturally leans toward memorisation and being drilled in a relatively narrow band of problem-solving techniques [1], which are, IME, often crammed prior to the exam and often forgotten quite severely pretty soon afterward. So even presuming that the things students know during the exam are valuable, it is questionable whether the measurement of value provided by the exam is actually valuable. And once the value of that information is brought into question … you have to ask … what are we doing here?

      Which isn’t to say that there’s no value created in doing coursework and cramming for exams. Instead, given that a computer can now so easily augment our ability to do this assessment, you have to ask what education is for and whether it can become something better than what it is given what are supposed to be the generally lofty goals of education.

      In reality, I suspect (as many others do) that the core value of the assessment system is to simply provide a filter. It’s not so much what you’re being assessed on as much as your ability to pass the assessment that matters, in order to filter for a base level of ability for whatever professional activity the degree will lead to. Maybe there are better ways of doing this that aren’t so masked by other somewhat disingenuous goals?

      Beyond that there’s a raft of things the education system could emphasise more than exam-based assessment. Long-form problem solving and learning. Understanding things or concepts as deeply as possible and creatively exploring the problem space and its applications. Actually learning the scientific method in practice. Core and deep concepts, both in theory and application, rather than specific facts. Breadth over depth, in general. Actual civics and the knowledge required to be a functioning member of the electorate.

      All of which are hard to assess, of course, which is really the main point of pushing back against your comment … maybe we’re approaching the point where the cost-benefit equation for practicable assessment is being tipped.


      [1] In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.
      • HexesofVexes@lemmy.world

        Ah the calculator fallacy; hello my old friend.

        So, a calculator is a great shortcut, but it’s useless for most mathematics (i.e. proof!). A lot of people assume that having a calculator means they do not need to learn mathematics - a lot of people are dead wrong!
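
        To illustrate the kind of thing a calculator cannot do, here is a standard example (not one from this thread): the proof that √2 is irrational.

        \[
        \sqrt{2} = \tfrac{p}{q},\ \gcd(p,q)=1 \;\Rightarrow\; 2q^2 = p^2 \;\Rightarrow\; 2 \mid p \;\Rightarrow\; p = 2k \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; 2 \mid q,
        \]

        which contradicts \(\gcd(p,q)=1\), so no such fraction exists. A calculator will happily tell you \(\sqrt{2} \approx 1.41421356\), but it cannot carry out that argument.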

        In terms of exams being about memory, I run mine open book (i.e. students can bring pre-prepped notes in). Did you know some students still cram and forget right after the exam? Did you know they forget even faster with coursework?

        Your argument is a good one, but let’s take it further: let’s rebuild education into an employer-centric training system, focusing on the use of digital tools alone. It works well and productivity skyrockets for a few years, but the humanities die out, pure mathematics (which helped create AI) dies off, and so do theoretical physics, chemistry and biology. Suddenly innovation slows down, and you end up with stagnation.

        Rather than moving us forward, such a system would lock us into place and likely create out-of-date workers.

        At the end of the day, AI is a great tool, but so is a hammer, and (like AI today) it was a good tool for solving many of the problems of its time. However, I wouldn’t want to learn only how to use a hammer; otherwise, how would I be replying to you right now?!?

        • maegul@lemmy.ml

          So … I honestly think this is a problematic reply … I think you’re being defensive (and consequently maybe illogical), and, honestly, that would be the red flag I’d look for to indicate that there’s something rotten in academia. Otherwise, there might be a bit of a disconnect here … thoughts:

          • The calculator was in reference to arithmetic and other basic operations and calculations using them … not higher level (or actual) mathematics. I think that was pretty clear and I don’t think there’s any “fallacy” here, like at all.
          • The value of learning (actual) mathematics is pretty obvious I’d say … and was pretty much stated in my post about alternatives to emphasise. On which, getting back to my essential point … how would one best learn and be assessed on their ability to construct proofs in mathematics? Are timed open book exams (and studying in preparation for them) really the best we’ve got!?
          • Still forgetting with open book exams … seems like an obvious outcome, as the in-exam materials de-emphasise memory … they probably never knew the things you claim they forget in the first place. Why? Because the exam only requires the students to be able to regurgitate in the exam, which is the essential problem, and for which in-exam materials are a perfect assistant. Really not sure what the relevance of this point is.
          • Forgetting after coursework … how do you know this (genuinely curious)? Even so, course work isn’t the great opposite to exams. Under the time crunch of university, they are also often crammed, just not in an examination hall. The alternative forms of education/assessment I’m talking about are much more long-form and exploration and depth focused. The most I’ve ever remembered from a single semester subject came from when I was allowed to pursue a single project for the whole subject. Also, I didn’t mention ordinary general coursework in my post, as, again, it’s pretty much the same paradigm of education as exams, just done at home for the most part.
          • Rebuilding education toward an employer-centric training system … I … ummm … never suggested this … I suggested the opposite … only things that were far more “academic” than this and were never geared toward “productivity”. This is a pretty bad straw man argument for a professor to be making, especially given that it seems constructed to conclude that the academy and higher learning are essential for the future success of the economy (which I don’t disagree with or even question in my post).
          • You speak about AI a lot. Maybe your whole reply was solely to the calculator point I made. This, I think, misses the broader point, which most of my post was dedicated to. That is, this isn’t about us now needing to use AI in education (I personally don’t buy that at all, for probably much the same reason you’d push back on it). Instead, it’s about what it means about our education system that AI can kinda do the thing we’re using to assess ourselves … on which I say, it tells us that the value of the assessment system we take pretty seriously ought to be questioned, especially, as I think we both agree, given the many incredibly valuable things education has to offer the individual and society at large. In my case, I go further and say that the assessment system has already detracted from these potential offerings, and that it does not bode well for modern western society that it seems to be leaning into the assessment system rather than expanding its scope.
          • Landrin201@lemmy.ml

            OK Mr Socrates, how else would you assess whether a student has learned something?

            • maegul@lemmy.ml

              Ha … well, if I had answers I probably wouldn’t be here! But seriously, I do think this is a tough topic with lots of tangled threads linked to how our society functions. I’m not sure there are any easy “fixes”, I don’t think anyone who claims there are can really be trusted, and it may very well turn out that I’m completely wrong and there is no “better way”, as something flawed and problematic may just turn out to be what humanity needs.

              A pretty minor example, based on the whole thing of returning to paper exams. What happens when you start forcing students to be judged on their ability to do something alone, when they know very well that they could do better with an AI assistant? Like at a psychological and cultural level? I don’t know; I’m not sure my generation (Xennial) or earlier ever had that. Even with calculators and arithmetic, it was always about laziness, or dealing with big numbers that were impossible for normal humans, or ensuring accuracy. It may not be the case that AI is at that level yet for many exams and students (I really don’t know), but it might be, or might be soon. However valuable it is to force students to learn to do the task without the AI, there’s gotta be some broad cultural effect in just ignoring the super useful machine.

              Otherwise, my general ideas would be to emphasise longer-form work (which AI is not terribly useful for). Work that requires creativity, thinking, planning, coherent understanding, human-to-human communication and collaboration. So research projects, actual practical work, debates, teaching as a form of assessment, etc. In many ways, the idea of “having learned something” becomes just a baseline expectation. Exams, for instance, may still hold lots of value, not as forms of objective assessment but as a way of calibrating where you’re up to on the basic requirements to start the real “assessment” and what you still need to work on.

              Also … OK Mr Socrates … is maybe not the most polite way of engaging here … comes off as somewhat aggressive TBH.

      • CapeWearingAeroplane@sopuli.xyz

        I think a central point you’re overlooking is that we have to be able to assess people along the way. Once you get to a certain point in your education you should be able to solve problems that an AI can’t. However, before you get there, we need some way to assess you in solving problems that an AI currently can. That doesn’t mean that what you are assessed on is obsolete. We are testing to see if you have acquired the prerequisites for learning to do the things an AI can’t do.

        • maegul@lemmy.ml
          1. AI isn’t as important to this conversation as I seem to have implied. The issue is us, i.e. humans, and what value we can and should seek from our education. What AI can or can’t do, IMO, only affects vocational aspects in terms of what sorts of things people are going to do “on the job”; and the broad point I was making in the previous post is that AI being able to do well at something we use for assessment is an opportunity or prompt to reassess the value of that form of assessment.
          2. Whether AI can do something or not, I call into question the value of exams as a form of assessment, not assessment itself. There are plenty of other things that could be used for assessment or grading someone’s understanding and achievement.
          3. The real bottom line on this is cost, and that we’re a metric-driven society. Exams are cheap to run and provide clean numbers. Any more substantial form of assessment, however much better it targets more valuable skills or understanding, would be harder to run. But again, I call into question how valuable all of what we’re doing actually is compared to what we could be doing, however much more expensive, and whether we should really try to focus more on what we humans are good at (and even enjoy).
          • pinkdrunkenelephants@sopuli.xyz

            AI can’t do jack shit with any meaningful accuracy anyway so it’s stupid to compare human education to AI blatantly making shit up like it always does

      • ZzyzxRoad@lemm.ee

        Here’s a somewhat tangential counter, which I think some of the other replies are trying to touch on … why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

        My theory prof said there would be paper exams next year, because it’s theory. You need to be able to read an academic paper and know what theoretical basis the authors had for their hypothesis. I’m in liberal arts/humanities. Yes, we still exist, and we are the ones that AI can’t replace. If the whole idea is that it pulls from information that’s already available, and a researcher’s job is to develop new theories and ideas and do survey or interview research, then we need humans for that. If I’m trying to become a professor/researcher, using AI to write my theory papers is not doing me or my future students any favors. Statistical research, on the other hand: they already use programs for that and use existing data, so idk. But even then, any AI statistical analysis should be testing a new hypothesis that humans came up with, or a new angle on an existing one.

        So idk how this would affect engineering or tech majors. But for students trying to be psychologists, anthropologists, social workers, or professors, using it for written exams just isn’t going to do them any favors.

        • maegul@lemmy.ml

          I also used to be a humanities person. The exam based assessments were IMO the worst. All the subjects assessed without any exams were by far the best. This was before AI BTW.

          If you’re studying theoretical humanities type stuff, why can’t your subjects be assessed without exams? That is, by longer form research projects or essays?

      • dragonflyteaparty@lemmy.world

        As they are talking about writing essays, I would argue the importance of being able to do it lies in being able to analyze a book/article/whatever, make an argument, and defend it. Being able to read and think critically about the subject would also be very important.

        Sure, rote memorization isn’t great, but neither is having to look something up every single time you need it because you forgot. There are also many industries in which people do need a large base of information available for quick recall. Learning to do that much later in life sounds very difficult. I’m not saying people should memorize everything, but not having very many facts about the world around you at basic recall doesn’t sound good either.

        • maegul@lemmy.ml

          Learning to do that much later in life sounds very difficult

          That’s an interesting point I probably take for granted.

          Nonetheless, exercising memory is probably something that could be done in a more direct fashion, and therefore probably better, without that concern affecting the way we approach all other forms of education.

      • tony@lemmy.hoyle.me.uk

        It’s an interesting point… I do agree memorisation is (and always has been) used as more of a substitute for actual skills. It’s always been a bugbear of mine that people aren’t taught to problem solve, just regurgitate facts, when facts are literally at our fingertips 24/7.

        • maegul@lemmy.ml

          Yea, it isn’t even a new problem. The exam was questionable before AI.

            • SpiderShoeCult@sopuli.xyz

              While I do agree with your initial point (that memorization is not really the way to go in education; I’ve hated it all my life because it was never a true filter - a parrot could pass university-level tests if trained well enough), I will answer your first point there and say that yes, it is important to know where Yugoslavia was, because politics has always been first and foremost influenced by geography, and not just recently.

              Without discussing the event mentioned itself, some points to consider:

              1. The cultural distribution of people - influenced by geography - people on the same side of a mountain or river are more likely to share the same culture, for example. Also: were these lands easily accessible to conquering armies and full of resources? Have some genocide and replacement with colonizers from the empire, and the pockets of ‘natives’ left start harboring animosity towards the new people.

              2. Spheres of influence throughout history - arguably the most important factor - that area of Europe has usually been hammered by its more powerful neighbours, with nations not possessing adequate diplomacy or tactics being absorbed into or heavily influenced by whatever empire was strongest at the time - the Ottoman Empire, the USSR, the Roman Empire if we want to go that far back. So I would say hearing ‘Yugoslavia was in South East Europe’ would immediately prompt an almost instinctual question of ‘Oh, what terrible things happened there throughout history, then?’ for one familiar with that area, thereby raising this little tidbit to one of the top facts.

              We could then ask what would have happened to these people had they been somewhere else. History is written by the victors, and the nasty bits (like sabotage and propaganda to prevent a nation in a certain geographic position from becoming too powerful) are left out.

              My geopolitics game isn’t that strong but I’m going to go out on a limb here and say that if the Swiss weren’t in the place they are, they would probably not be the way they are (no negative nuance intended). Living in a place that’s hard to invade tends to shape people differently than constantly looking over your shoulder.

              And reading your second point, I see it’s about what I wrote in this wall of text. Odd.

              • maegul@lemmy.ml

                And reading your second point, I see it’s about what I wrote in this wall of text. Odd.

                Yea … we’re on the same page here (I think). All the things you’re talking about are the important stuff, IMO. “Yugoslavia is in south-eastern Europe” doesn’t mean much, even if you can guess something about the relatively obvious implications of that geography, as you say. But those implications come from somewhere, some understanding of some other episode of history. Or it could come from learning about Yugoslavia’s and the Balkans’ history. For instance, you might note from the location that it’s relatively close to Turkey, but that wouldn’t lead you to naturally expect a sizeable Islamic population in the region (well, I didn’t at first), unless you really knew the Ottoman history too. So there’s a whole story to learn there of the particular cultural make-up of the place, where it comes from, and how that leads to cultural tensions come the Yugoslavian wars. In learning about that, you can learn about how far away the Ottoman Empire was and where its borders got to over time, where the USSR was, the general ambit of Slavic culture, etc. Once you’ve got a story to tell, those things become naturally important and memorable.

                And now I’ve added my own wall of text … sorry. So … yes! I agree! Both of our walls of text are (loosely) about the important stuff, with facts, sure, but motivated by and situated in history (though there’s obviously a fuzzy line there too!)

                  • maegul@lemmy.ml

                    your own inflections on your own post

                    Inflection? I used the present tense of be/is for past events … that’s a tense or conjugation. No inflections occur with irregular verbs like that. Are you not on top of your grammar?

      • Spike@feddit.de

        In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.

        You are getting some flak, but imho you are right. The only thing an exam really tests is how well you do in exams. Of course, educators don’t want to hear that. But if you take a deep dive into the (scientific) literature on the topic, the question “What are we actually measuring here?” is rightfully raised.

        • maegul@lemmy.ml

          Getting flak on social media, through downvotes, can often (though not always!) be a good thing … means you’re touching a nerve or something.

          On this point, I don’t think I’ve got any particularly valuable or novel insights, or even any good solutions … I’m mostly looking for a decent conversation around this issue. Unfortunately, I suspect, when you get everyone to work hard on something and give them prestigious certifications for succeeding at that something, and then do this for generations, it can be pretty hard to convince people to not assign some of their self-worth to the quality/value/meaning of that something and to then dismiss it as less valuable than previously thought. Possibly a factor in this conversation, which I say with empathy.


          Any links to some literature?

          • Spike@feddit.de

            I’ve only used papers in German so far, sadly.

            Here is something I found interesting in English:

            Sato, B. K., Hill, C. F. C., & Lo, S. M., “Testing the test: Are exams measuring understanding?”, Biochemistry and Molecular Biology Education.

            In general: elicit.org - a really good site.

            • maegul@lemmy.ml

              Hadn’t heard of that elicit site … thanks! How have you found it? It makes sense that it exists already, but I hadn’t really thought about it (I haven’t looked up papers recently, but may soon).

              Also thanks for the paper!!

              • Spike@feddit.de

                I found it relatively early after it was created, and I use it to get a quick overview of papers when writing my own. It is sooo good for that.

      • assassin_aragorn@lemmy.world

        In my experience, they love to give exams where it doesn’t matter what notes you bring: you’re on the same level whether you write down only the essential equations or copy out the whole textbook.

    • MNByChoice@midwest.social

      Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students

      I get that this is a quick post on social media and only an anecdote, but that is interesting. What do you think the connection is? AI, anxiety, or something else?

      • Kage520@lemmy.world

        That sounds like AI. If you do your homework, then even sitting a regular exam you should score better than 20%. With this exam being open book, it sounds like they were unfamiliar with the textbook and could not find answers fast enough.

      • HexesofVexes@lemmy.world

        It’s a tough one, because I cannot say with 100% certainty that AI is the issue. Anxiety is definitely a possibility in some cases, but not all; perhaps thinking time might be a factor, or even just good old copying and then running the work through a paraphraser. The large number of absences also means it was hard to benchmark those students based on class assessment (yes, we are always tracking how you are doing in class, not to judge you, but just in case you need some extra help!).

        However, AI is a strong contender, since the “open book” part didn’t include the textbook; it allowed students to take a booklet of their own notes into the exam (including fully worked examples). They scored low because they didn’t understand their own notes, and after reviewing the notes they brought in (all word-perfect), it was clear they did not understand the subject.

        • joel_feila@lemmy.world

          Oh, an open-notes test. Man, I never use my notes on those. I try not to use the book on open-book tests either.

          • HexesofVexes@lemmy.world

            Curious to know your take on why you avoid using the notes - a couple of my students clearly did this in the final, and insights into why would be welcome!

      • adavis@lemmy.world

        Not the previous poster. I taught an introduction to programming unit for a few semesters. The unit was almost entirely portfolio-based, i.e. all done in class or at home.

        The unit had two litmus tests under exam-like conditions, on paper in class. We’re talking the week 10 test had complexity equal to week 5 or 6. Approximately 15-20% of the cohort failed this test, which, if they were up to date with class work, effectively proved they had cheated. They’d be submitting coursework of little 2D games, then on paper be unable to “with a loop, print all the odd numbers from 1 to 20”.
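
        For illustration, a minimal sketch of the kind of answer that litmus question expected (the unit’s language isn’t stated, so Python is assumed here):

        ```python
        # “With a loop, print all the odd numbers from 1 to 20.”
        for n in range(1, 21):
            if n % 2 == 1:
                print(n)
        ```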

    • Smacks@lemmy.world

      Graduated a year ago, just before this AI craze was a thing.

      I feel there’s a social shift when it comes to education these days. It’s mostly: “do a 500-1,000 word essay to get 1.5% of your grade”. The education doesn’t matter anymore, the grades do; if you pick something up along the way, great! But it isn’t that much of a priority.

      I think it partially comes from colleges squeezing students of their funds, and indifferent professors who just assign busywork for the sake of it. There are a lot of uncaring professors that just throw tons of work at students, turning them back to the textbook whenever they ask questions.

      However, I don’t doubt a good chunk of students use AI on their work to just get it out of the way. That really sucks and I feel bad for the professors that actually care and put effort into their classes. But, I also feel the majority does it in response to the monotonous grind that a lot of other professors give them.

    • mrspaz@lemmy.world

      I recently finished my degree, and exam-heavy courses were the bane of my existence. I could sit down with the homework, work out every problem completely with everything documented, and then sit down to an exam and suddenly it’s “what’s a fluid? What’s energy? Is this a pencil?”

      The worst example was a course with three exams each worth 30% of the grade, attendance 5%, and homework 5%. I had to take the course twice; 100% on the homework each time, but I barely scraped by with a 70.4% after exams on the second attempt. Courses like that took years off my life in stress. :(

        • mrspaz@lemmy.world

          Sure; it was Mechanical Engineering. The class was “Vibrations & Controls;” the first half of the course was vibrations / oscillatory systems, and then the second half was theory of feedback & control systems (classic “PID” controllers for the most part). The exams were pencil-and-paper, in-person, time-limited.

          The first attempt we were allowed nothing except the exam and paper for answers; honestly I’m not sure what that professor was expecting.

          In my second attempt the professor provided a formula sheet, but he was of the mindset of “If you know F=ma, you can derive anything you need!” so the formula sheets were sparse to put it mildly. It was just enough to keep me from fully collapsing in panic and bombing, but it was close.

          • HexesofVexes@lemmy.world

            Thanks for the info!

            If you’d been able to take 4 sides (A4) of written notes in, would this have helped mitigate the stress?

            What do you feel would have been a better method of assessment?

            • mrspaz@lemmy.world

              Being able to bring my own formula sheet (or notes) definitely helped. Two full pages of notes would be great, though I would still get bad nerves even in those cases (the very idea that the next 60 minutes of class time decides a full 30% of the course grade just rattled me badly).

              For me the ideal type of course would be the Thermodynamics of Mechanical Systems course I took. The exams were in-person but open-note and straightforward, with relatively simple conceptual questions. Credit was split between the exams and bi-monthly “mini projects.” These would ask you to apply the class concepts to some larger set of related problems; parameters were provided and you would have to determine the answers using what was learned in class. (For example, one project was to design a steam turbine power plant with a target output of 50 MW, with an ambient temperature of 30 °C and cooling water available at 25 °C. Determine the heat input needed from the boiler, choose an appropriate number of turbine stages with reheat if possible, size the condenser appropriately, and add economizers if they can be used. You’d lay it all out and indicate the temperatures, pressures, power inputs and outputs, exergy of the system, etc.)
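
              For a sense of scale, here is a rough back-of-envelope sketch of the first step of that example project (the boiler temperature and cycle efficiency below are assumptions for illustration; they were not part of the original brief):

              ```python
              # Back-of-envelope estimate for a 50 MW steam plant rejecting heat to 25 °C cooling water.
              # Assumptions (not from the original project brief): steam at 500 °C and a real-cycle
              # efficiency of roughly 60% of the Carnot limit.
              W_net = 50e6                      # target net output, W
              T_hot = 500 + 273.15              # assumed boiler/steam temperature, K
              T_cold = 25 + 273.15              # cooling-water temperature, K

              eta_carnot = 1 - T_cold / T_hot   # ideal efficiency bound (~61%)
              eta_cycle = 0.6 * eta_carnot      # crude estimate for a real Rankine cycle (~37%)

              Q_boiler = W_net / eta_cycle      # heat input needed from the boiler, W
              Q_condenser = Q_boiler - W_net    # heat rejected to the cooling water, W

              print(f"Boiler input:  {Q_boiler / 1e6:.0f} MW")     # roughly 136 MW
              print(f"Heat rejected: {Q_condenser / 1e6:.0f} MW")  # roughly 86 MW
              ```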

              I did stellar in that class. I would have loved that format everywhere (simple concept exams + application projects).

          • assassin_aragorn@lemmy.world

            The true engineering experience is exams that ask you to derive God after your homework was just 2+2. I remember hearing a rumor once that the exams were meant to find students who would be good to help with the professor’s research.

            Now that I’m on the other side of the degree with a couple years, I do think those tests were the crucible that turned us into engineers. Working through daunting, impossible questions under stress is how we developed our problem solving capability.

            I do think though there’s vast improvements to still be made. It’s highlighted in just how many of us have anxiety and depression and become nervous wrecks. Make sure to take care of yourself and see professionals to help with that, if you need it.

    • Spike@feddit.de

      We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

      Could you ever, though, when giving them work they didn’t have to do in your physical presence? People have had their friends, parents, or ghostwriters do the work for them all the time. You should know that.

      This is not an AI problem; AI “just” made it far more widespread and easier to access.

      • HexesofVexes@lemmy.world

        “Sometimes” would be my answer. I caught students who colluded during online exams, and even managed to spot students pasting directly from an online search. Those were painful conversations, but I offered them resits and they were all honest and passed with some extra classes.

        With AI, detection is impossible at the moment.

      • SamC@lemmy.nz

        Of course people could do that, but you have to find someone willing to do it or be able to afford to pay for it. It happens, but it’s maybe 5% of students or less.

        There is always going to be cheating. But once it becomes nearly impossible to detect, and maybe 30% or more people are doing it, the system breaks down.

      • HexesofVexes@lemmy.world

        “Avoid at all costs because we hate marking it even more than you hate writing it”?

        An in-person exam can be done in a locked-down IT lab, and this leads to a better marking experience and, I suspect, a better exam experience!

    • kromem@lemmy.world

      Is AI going to go away?

      In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?

      What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?

      Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn’t allowed?

      I can hardly imagine just how useful a programming class where you need to write code on a blank sheet of paper with a pen and no linters might be, then.

      Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.

      For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?

      I get that it’s difficult to adjust to something that’s changed everything in the field within months.

      But it’s quite likely that a fair bit of how education has been done for the past 20 years in the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to these changes rather than simply oppose them, which puts academia in a bubble further and further detached from real-world feasibility.

      • SkiDude@lemmy.world

        If you’re going to take a class to learn how to do X, but never actually learn how to do X because you’re letting a machine do all the work, why even take the class?

        In the real world, even if you’re using all the newest, cutting edge stuff, you still need to understand the concepts behind what you’re doing. You still have to know what to put into the tool and that what you get out is something that works.

        If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?

      • orangeboats@lemmy.world

        As an anecdote though, I once saw someone simply forwarding (i.e. copy-pasting) their exam questions to ChatGPT. His answers were just ChatGPT responses, paraphrased to make them look less GPT-ish. I am not even sure whether he understood the questions themselves.

        In this case, the only skill that is tested… is English paraphrasing.

      • HexesofVexes@lemmy.world

        I’ll field this because it does raise some good points:

        It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on when AI started cropping up in the news, I tested the validity of answers given:

        1. For topics aimed at 10–18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.

        2. For 1st–3rd year degree material, it really starts to make dangerous errors, but it’s a good tool for summarising material from textbooks.

        3. At Masters level and above, it spews (very convincing) bollocks most of the time.

        Recognising the mistakes in (1) requires checking it against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you are going to need to be an expert to recognise the mistakes (it literally misinterpreted my own work back at me at one point).

        The irony is, education in its current format is already working with AI: it’s teaching people how to correct the errors it produces. Theming assessment around an AI is a great idea until you have to create one (the very fact that it is moving fast means everything you teach about it ends up out of date by the time a student needs it for work).

        However, I do agree that education as a whole needs overhauling. How to do this: maybe fund it a bit better so we’re able to hire folks to help develop better courses - at the moment, every “great course” you’ve ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).

        • zephyreks@lemmy.ca

          On the other hand, what if the problem is simply one that’s no longer important for most people? Isn’t technological advancement supposed to introduce abstractions that people can build on?

          • average650@lemmy.world

            The point is the students can’t get to the higher level concepts if they’re just regurgitating from what chatgpt says.

          • MBM@lemmings.world

            If you never learn how to do the basics without ChatGPT, it’s a big jump to figure out the advanced topics where ChatGPT no longer helps you

        • Armok: God of Blood@lemmy.world

          (1) seems to be a legitimate problem. (2) is just filtering the stronger students from the weaker ones with extra steps. (3) isn’t an issue unless a professor teaching graduate classes can’t tell BS from truth in their own field. If that’s the case, I’d call the professor’s lack of knowledge a larger issue than the student’s.

          • jarfil@lemmy.world

            You may not know this, but “Masters” is about uncovering knowledge nobody had before, not even the professor. That’s where peer reviews and shit like LK-99 happen.

            • Womble@lemmy.world

              It really isn’t. You don’t start doing properly original research until a year or two into a PhD. At best a masters project is going to be doing something like taking an existing model and applying it to an adjacent topic to the one it was designed for.

      • AeroLemming@lemm.ee

        I get what you’re saying. I think that programming tests on paper are ridiculous too and that testing people’s ability to write flawless code without a linter is not meaningful in the modern era. AI is a bit different for two reasons.

        1. It’s not reliable. If someone uses AI too heavily, they’re going to be entirely unable to work if their AI of choice spits out the wrong answer or goes offline, both of which happen a LOT. Someone needs to be able to create and verify their own code.

        2. When you write an essay about a topic, it’s not about showing that you can produce an essay about that topic, it’s about showing that you have a well-rounded and decent understanding of the topic and therefore can likely apply it in real-world situations (which are harder to test for). AI provides no value in real-world applications of many subjects you may find yourself writing about.

      • pinkdrunkenelephants@sopuli.xyz

        Textbooks, like on physical paper, are never going to just go away. They offer way too many advantages over even reading digital books.

    • Phoebe@feddit.de

      Sorry, but it was never about OUR ability in the first place.

      In my country, exams are old, outdated, and often way too hard. All classes are outdated and way too hard. It often feels like we are stuck in the middle of the 20th century.

      You have no chance when you have a disability. Or when you have kids or parents to take care of. Or hell: you have to work, because you can’t afford university otherwise.

      So I can totally understand why students feel the need to use AI to survive that torture. I don’t feel sorry for an outdated university system.

      If it is about OUR ability, then create a system that is built for students and their needs.

      • pinkdrunkenelephants@sopuli.xyz

        If you can use AI to do the homework for you, we can use AI to do the job instead of you

        For the love of god, think beyond yourself for one damn minute

        • jarfil@lemmy.world

          Precisely. Homework and tests that can be solved by an AI are useless; nobody will hire you to do any of it when they can just plug in an AI.