• 0 Posts
  • 34 Comments
Joined 9 months ago
cake
Cake day: January 25th, 2024

help-circle
  • Computers are a fundamental part of that process in modern times.

    If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2000 lbs would be stupid. The argument wouldn’t make sense. Why? Because the same exact logic applies. The test is to assess you, not the machine.

    Just because computers exist, can do things, and are available to you, doesn’t mean that anything to assess your capabilities can now just assess the best available technology instead of you.

    Like spell check? Or grammar check?

    Spell/Grammar check doesn’t generate large parts of a paper, it refines what you already wrote, by simply rephrasing or fixing typos. If I write a paragraph of text and run it through spell & grammar check, the most you’d get is a paper without spelling errors, and maybe a couple different phrases used to link some words together.

    If I asked an LLM to write a paragraph of text about a particular topic, even if I gave it some references of what I knew, I’d likely get a paper written entirely differently from my original mental picture of it, that might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.

    These are not even remotely comparable.

    Assuming the point is how well someone conveys information, then wouldn’t many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they’re not using machines their whole lives?

    This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.

    I use LLMs from time to time to better explain a concept to myself, or to get ideas for how to rephrase some text I’m writing. But if I used the LLM all the time, for all my work, then me being there is sort of pointless.

    Because, the thing is, most LLMs aren’t used in a way that conveys info you already know. They primarily operate by simply regurgitating existing information (rather, associations between words) within their model weights. You don’t easily draw out any new insights, perspectives, or content, from something that doesn’t have the capability to do so.

    On top of that, let’s use a simple analogy. Let’s say I’m in charge of calculating the math required for a rocket launch. I designate all the work to an automated calculator, which does all the work for me. I don’t know math, since I’ve used a calculator for all math all my life, but the calculator should know.

    I am incapable of ever checking, proofreading, or even conceptualizing the output.

    If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I have to then learn everything it knows, before I can exceed its capabilities.

    We’ve always used technology to augment human capabilities, but replacing them often just means we can’t progress as easily in the long-term.

    Short-term, sure, these papers could be written and replaced by an LLM. Long-term, nobody knows how to write papers. If nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?

    If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.


  • ArchRecord@lemm.eetoScience Memes@mander.xyzClever, clever
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    1
    ·
    6 days ago

    Schools are not about education but about privilege, filtering, indoctrination, control, etc.

    Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.

    “Filtering” doesn’t exactly provide enough context to make sense in this argument.

    Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you’d likely see how this doesn’t make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people’s views within it, than a non-educational background can.

    “Control” is just another fearmongering word. What control, exactly? How is it being applied?

    Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.

    They’re not tricking students, they’re tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don’t want to actually get the knowledge signified by the degree, then you can put “I use ChatGPT and it does just as good” on your resume, and see if employers value that the same.

    Maybe if homework can be done by statistics, then it’s not worth doing.

    All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I’d used a modern LLM. All the history assignment’s questions could have been answered with access to Wikipedia.

    But if I’d done that, I wouldn’t know math, I would know no history, and I wouldn’t be able to properly write any long-form content.

    Even when technology exists that can replace functions the human brain can do, we don’t just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.

    This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.

    The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.

    And I’m sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don’t even proofread the output.





  • I prefer using the self checkout, I don’t consider it work, because I also consider it work to mentally deal with meaningless small talk, and to deal with waiting in line for ten minutes when I’m buying just a few items.

    You might feel like it’s work for you, and that’s fine. You can then use the staffed checkout lanes, which are explicitly there for anyone who dislikes doing self checkout.

    The problem isn’t doing “work” by using self checkouts, the problem is capitalist cost-cutting, which would be done with or without self checkout machines.



  • And on top of that, even in cases where it is demonstrably true that any given group/population/region, say, does more crime than the average, it almost always boils down to the fault being laid on the existing discrimination against that group causing further harm.

    Like how racists will say that black people do more crime because they’re fatherless, (and that it’s a result of their culture that causes the fatherlessness) but don’t see the problem with specifically over-policing those neighborhoods and arresting the fathers they say need to be there for the kids, thus perpetuating the cycle in the first place.

    Even if it were true that, somehow, miraculously, trans people did indeed do more crime than the average for their gender or sex, they also face multiple times higher abuse rates than non-trans people, which is known to perpetuate cyclical violence. But yet, somehow, they still do the same amount of crime as everyone else (at least, comparative to their birth sex, generally.)




  • ArchRecord@lemm.eetoMemes@lemmy.mlAI bros
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 months ago

    I find those kinds of chatbots useful, but those aren’t the ones I encounter 90% of the time. Most of the time, it’s a chatbot that summarizes the help articles I just read, giving faulty interpretations of the source material, that then goes on to never direct me to a real person unless I tell it multiple times that the articles it’s paraphrasing aren’t helping. (and sometimes, they have no live support at all, and only an LLM + support articles)


  • ArchRecord@lemm.eetoMemes@lemmy.mlAI bros
    link
    fedilink
    English
    arrow-up
    3
    ·
    2 months ago

    Oh yeah, it’s definitely useful for that!

    Since LLMs are essentially just very complicated probabilistic links between words, it seems to be extremely good at picking the exact word or phrase that even a thesaurus couldn’t get me.


  • ArchRecord@lemm.eetoMemes@lemmy.mlAI bros
    link
    fedilink
    English
    arrow-up
    2
    ·
    2 months ago

    I primarily end up using LLMs through DuckDuckGo’s private frontend alongside a search, so if my current search doesn’t yield the correct answer to my question (i.e. I ask for something but those keywords only ever turn up search results on a different, but similar topic) then I go to the LLM and ask a more refined question, that otherwise doesn’t produce any relevant results in a traditional keyword search.

    I also use integrated LLMs to format and distill my offhand notes, (and reformat arbitrary text based on specific criteria repeatedly for structured notes,) learn programming syntax more at my own pace and in my own way, and just generally get answers on more well-known topics a lot faster than I would scrolling past 5 pages of SEO-“optimized” garbage just designed to fill time for the ads to load before actually giving me a good answer.


  • ArchRecord@lemm.eetoMemes@lemmy.mlAI bros
    link
    fedilink
    English
    arrow-up
    24
    ·
    2 months ago

    I have never once found an “AI” feature integrated by a corporation useful.

    I have only ever found “AI” useful when it’s unobtrusive, and something I chose to use manually. Sometimes an LLM is useful to use, but I don’t need it shilled to me inside a search bar or in a support chat that won’t solve my problem until I bypass the LLM.







  • ArchRecord@lemm.eetoAtheism@lemmy.mlBook Club
    link
    fedilink
    arrow-up
    5
    ·
    3 months ago

    You decide. We all decide.

    On an individual basis, you can decide if you think an action is ethical or not based on if it, for instance, causes harm, and you dislike causing harm to others.

    As a society, we broadly come to a consensus on what we consider ethical or not by majority opinion, and turn those into laws. It’s why murder is considered wrong, in both religious and non-religious institutions and societies at large.

    For example, as a society, we deemed killing other humans to be wrong because then we would be at risk of being killed, and it made it harder for us to survive overall. Those who killed were ostracized, those who didn’t were not. No religion was required to form such a belief, but it can certainly be a part of religious teachings.

    You can use the Bible as a framework for how you decide what’s moral or not, but it’s not the only way to do so.