• 0 Posts
  • 60 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Yeah, I may be wrong, but I think it usually comes down to a very specific kind of precision being needed. It’s not meant to be hostile, I think, but to provide a domain-specific explanation clearly to those who need to interpret it in a specific way. In law, specific jargon implies very specific behaviour, so it’s meant to be precise in its own way (not a law major, can’t say for sure), but it can seem completely meaningless if you aren’t prepped for it.

    Same thing in other fields. I had a professor who was very pedantic about {braces} vs [brackets] vs (parentheses), and it seemed totally unnecessary to be so corrective in discussions. But when explaining where things went wrong in a student’s work, being able to quickly differentiate them was vital so the student could review the right areas, or understand things faster during a lecture later down the line.
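
    To make the distinction concrete, here’s a toy Python line (my example, not the professor’s) where all three delimiters show up with different meanings:

    ```python
    # Braces {} build a dict, brackets [] a list, parentheses () a tuple.
    # Misnaming one in feedback sends the student to the wrong spot.
    config = {                      # braces: dictionary literal
        "ports": [8080, 8443],      # brackets: list literal
        "host": ("localhost", 22),  # parentheses: tuple literal
    }
    ```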

    But that noise takes longer to teach through, so if it is important, it needs its own time to learn, and it will be inaccessible to anyone who didn’t get that time to learn and digest it.


  • Absolutely! One of the difficulties I have with my intro courses is working out when to introduce the vocabulary, because it’s important for engaging with the industry and the literature, but it adds a lot of noise to learning the underlying concepts, and some assessments end up losing sight of the concept and go straight to recalling the vocab.

    Knowing the terms can help you self-learn, but a textbook glossary could do the same thing.


  • PixelProf@lemmy.ca to Science Memes@mander.xyz · Calculus made easy · 43 points · 2 months ago

    There was a lovely computer science book for kids, whose name I can’t remember, all about evil jargon trying to prevent people from mastering the magical skills of programming and algorithms. I love these approaches. I grew up in an extremely non-/anti-academic environment, so I learned to explain things in non-academic ways, and it’s really helped me as an intro lecturer.

    Jargon is the mind killer. Shorthands are for the people who have enough expertise to really feel the depths of that shorthand and use it to tickle the old familiar neurons they represent without needing to do the whole dance. It’s easy to forget that to a newcomer, the symbol is just a symbol.





  • My two cents: after years of Markdown (and md-to-PDF solutions) and LaTeX, and a full two years of trying to commit to bashing my head against Word for work purposes, I’m really enjoying Typst. It didn’t take long to convert my themes, and having docs I can import, which are basically just variables to share across documents in a folder, has been really helpful. I haven’t gone too deep into it yet, but I’m excited to give it a deeper test run over the next little bit.


  • Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they ship smaller, fine-tuned, specific-purpose models rather than “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been rocking local LLMs for a while, and they’ve been a great small complement to language-processing tasks in my coding.

    Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling - I’m sure there are many great use cases. And, purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features could be included here too; those could be super lightweight and still helpful.
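
    As a rough sketch of how lightweight the local route can be, here’s page summarization with llama-cpp-python (the model path is a placeholder; any small instruct-tuned GGUF model would do):

    ```python
    from llama_cpp import Llama

    # Runs entirely on-device; nothing leaves the machine.
    llm = Llama(model_path="./models/small-instruct.gguf", n_ctx=4096, verbose=False)

    def summarize(page_text: str) -> str:
        """Return a short, locally generated summary of a page."""
        response = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": "Summarize the page in three bullet points."},
                {"role": "user", "content": page_text[:8000]},  # crude length cap
            ],
            max_tokens=200,
            temperature=0.2,  # keep summaries close to the source
        )
        return response["choices"][0]["message"]["content"]
    ```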

    If it goes fully remote AI, it loses a lot of privacy cred and positions itself really similarly to everyone else. From a financial perspective, bandwagoning on in-browser AI with a “we won’t send your data anywhere” pitch seems like a trendy, but potentially helpful and effective, way to bring in a demographic interested in it without sacrificing principles.

    But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn’t lead things astray too hard.


  • Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it’s high quality in the first place, but these systems are positive feedback loops, both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases, or to complete code that isn’t at the same quality bar or style as the training code.

    On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code, because it’s continuing what was written. Using standard approaches, documenting, and just generally following good practice with code before sending it to the LLM will majorly improve results.
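
    A toy illustration of that continuation effect, again with llama-cpp-python and a placeholder model path (both contexts are made-up examples):

    ```python
    from llama_cpp import Llama

    llm = Llama(model_path="./models/code-model.gguf", n_ctx=2048, verbose=False)

    # The same request with low- and high-quality context. Because the model
    # continues whatever it's given, the documented, type-hinted version
    # tends to get completed in the same careful style.
    SLOPPY = "def f(x,y): return x+y\n# add tax\n"

    DOCUMENTED = '''\
    def subtotal(prices: list[float], quantities: list[int]) -> float:
        """Return the pre-tax subtotal for a cart."""
        return sum(p * q for p, q in zip(prices, quantities))

    # Next: a function that applies a regional tax rate to a subtotal.
    '''

    for context in (SLOPPY, DOCUMENTED):
        out = llm(context, max_tokens=120, temperature=0.2)
        print(out["choices"][0]["text"], "\n---")
    ```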


  • I researched creative AI and how AI can help people be creative, back when people thought it was a ridiculous and pointless topic. I’m biased.

    Firstly, I think it’s important to see the non-chat applications. Goblin Tools is a great example of code we just couldn’t have written before. Purely from an NLP perspective, these tools are outstanding, if imperfect.

    I’m excited to see new paradigms of applications come up as talented new developers are able to run LLMs locally and integrate them into their everyday programming, and to see what they can cook up in a world where that’s normal.

    I’m interested in LLMs not to generate data on the fly, but to pre-generate and validate far more content or data than we’d otherwise be able to, for things like games.
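
    A sketch of what I mean, assuming a build-time pipeline and a hypothetical validity check (all names here are mine):

    ```python
    from llama_cpp import Llama

    llm = Llama(model_path="./models/small-instruct.gguf", verbose=False)

    def valid(line: str) -> bool:
        # Offline validation can be as strict as you like: length limits,
        # banned words, lore checks against the game's database, etc.
        return 0 < len(line) <= 80

    # Pre-generate at build time and keep only lines that pass validation;
    # the shipped game reads the vetted list and never touches a model.
    barks: list[str] = []
    while len(barks) < 50:
        out = llm("Write one short greeting a medieval shopkeeper might say:",
                  max_tokens=40, temperature=1.0)
        line = out["choices"][0]["text"].strip()
        if valid(line):
            barks.append(line)
    ```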

    From a chat perspective, I like that it can support fleshing out ideas and parsing lots of data in a usable way.

    And finally, I’m excited for how lightweight LLMs could affect user interface design. I can imagine a future where OSes have swappable LLMs, like they have shells, allowing natural-language interfacing with programs.
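
    Purely speculative, but the plumbing could be as simple as a swappable completion backend (every name below is hypothetical):

    ```python
    from llama_cpp import Llama

    # "Swappable LLMs like shells": the interface layer depends only on a
    # complete() callable, so the user can point it at any local model.
    def make_backend(model_path: str):
        llm = Llama(model_path=model_path, verbose=False)
        def complete(prompt: str) -> str:
            out = llm(prompt, max_tokens=60, temperature=0.1)
            return out["choices"][0]["text"].strip()
        return complete

    complete = make_backend("./models/small-instruct.gguf")  # user's choice

    request = "show the five largest files in this directory"
    suggestion = complete(
        "Translate the request into a single POSIX shell command.\n"
        f"Request: {request}\nCommand:"
    )
    print(f"$ {suggestion}")  # the user confirms before anything runs
    ```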

    I don’t know, it’s just really accessible NLP, and that’s great.


  • I sit somewhere tangential on this - I think Bret Victor’s thoughts are valid here, or at least my interpretation of them: we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and to reduce the cognitive load that’s better suited for the computer anyway. I get it’s not as valid here as in other use cases, but there’s some room for improvement.

    Having it in separate functions is more testable, more maintainable, and more readable when we’re thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts, and sometimes we just want to know the overall flow. Why can’t we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can’t we see the function inline, but with the parameter variables replaced by the passed values, to get a feel for how the function will flow and compute what can easily be computed (assuming no global state)?
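
    As a proof of concept, here’s a rough sketch with Python’s ast module of the kind of inline view an IDE could compute (toy code, ignoring scope, mutation, and side effects):

    ```python
    import ast
    import inspect

    def inline_view(func, *args):
        """Render a function body with parameters replaced by the values
        from a call site - a throwaway view, not a refactoring."""
        tree = ast.parse(inspect.getsource(func))
        fn = tree.body[0]  # the FunctionDef node
        bindings = dict(zip((a.arg for a in fn.args.args), args))

        class Substitute(ast.NodeTransformer):
            def visit_Name(self, node):
                if isinstance(node.ctx, ast.Load) and node.id in bindings:
                    return ast.Constant(value=bindings[node.id])
                return node

        fn.body = [Substitute().visit(stmt) for stmt in fn.body]
        return "\n".join(ast.unparse(stmt) for stmt in fn.body)

    def shipping_cost(weight_kg, rate):
        base = weight_kg * rate
        return base + 4.99 if base < 20 else base

    print(inline_view(shipping_cost, 2.5, 3.0))
    # base = 2.5 * 3.0
    # return base + 4.99 if base < 20 else base
    ```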

    I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I’m leaning toward the idea that our IDEs should be doubling down on taking advantage of language features, live computation, and cooperation with our coding style… and not just OOP. I’d love to hear about angles I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.






  • I appreciate the comment, and it’s a point I’ll be making this year in my courses. More than ever, students have been struggling to motivate themselves to do the work. The world’s on fire and it’s hard to intrinsically motivate to do hard things for the sake of learning, I get it. Get a degree to get a job to survive, learning is secondary. But this survival mindset means that the easiest way is the best way, and it’s going to crumble long-term.

    It’s like jumping into an MMORPG and using a bot to play the whole game. Sure, you have a max-level character, but you have no idea how to play or how to build a character, and you don’t get any of the references anyone else is making.


  • This is a very output-driven perspective. Another comment put it well, but essentially when we set up our curriculum we aren’t just trying to get you to produce the one or two assignments that the AI could generate - we want you to go through the motions and internalize secondary skills. We’ve set up a four year curriculum for you, and the kinds of skills you need to practice evolve over that curriculum.

    This is exactly the perspective I’m trying to get at with my comment - if you go to school to get a certification to get a job and don’t care at all about the learning, of course it’s nonsense to “waste your time” on an assignment that ChatGPT can generate for you. But if you’re there to learn and develop mastery, the additional skills you would have picked up by doing the hard thing - maybe with a chat AI supporting you in a productive way - are really where the learning is.

    If 5 year olds can generate a university level essay on the implications of thermodynamics on quantum processing using AI, that’s fun, but does the 5 year old even know if that’s a coherent thesis? Does it imply anything about their understanding of these fields? Are they able to connect this information to other places?

    Learning is an intrinsic task that’s been turned into a commodity. Get a degree to show you can generate that thing your future boss wants you to generate; knowing and understanding are secondary. This is the fear with generative AI - further losing sight of the fact that we learn through friction, and that the final output isn’t everything. Note that this is coming from a professor who wants to mostly do away with grades, but recognizes that larger systemic changes need to happen.


  • 100%, and this is really my main point. Because it should be hard and tedious, a student who doesn’t really want to learn - or doesn’t have trust in their education - will use the AI to bypass those tedious bits and the auxiliary skills they’re expected to pick up, rather than using it as a personal tutor instead of a replacement for those skills.

    So often students are concerned about getting a final grade, a final result, and think that was the point. Thus, “If ChatGPT can just give me the answer, what was the point?” But no, there were a bunch of skills along the way that are part of the scaffolding, and you’ve bypassed them through improper use of available tools. For example, in some of our programming classes we intentionally make you use worse tools early, to provide a fundamental understanding of the evolution of the language’s ergonomics, or to understand the underlying processes that power the more advanced, but easier to use, concepts. It helps you generalize later, so that you don’t just learn how to solve this problem in this programming language, but you learn how to solve the problem in a messy way that translates to many languages before you learn the powerful tools of this language. As a student, you may get upset you’re using something tedious or out of date, but as a mentor I know it’s a beneficial step in your learning career.

    Maybe it would help to teach students about learning early, and how learning works.



  • Education has a fundamental incentive problem. I want to embrace AI in my classroom. I’ve been studying ways of using AI for personalized education since I was in grade school. I wanted personalized education, the ability to learn off of any tangent I wanted, to have tools to help me discover what I don’t know so I could go learn it.

    The problem is, I’m the minority. Many of my students don’t want to be there. They want a job in the field, but don’t want to do the work. Your required course isn’t important to them, because they aren’t instructional designers who recognize that this mandatory tangent is scaffolding the next four years of their degree. They have a scholarship, and can’t afford to fail your assignment to get feedback. They have too many courses, and have to budget which courses to ignore. The university holds a duty to validate that those passing the courses met a level of standards and can reproduce their knowledge outside of a classroom environment. They have a strict timeline - every year they don’t certify their knowledge to satisfaction is a year of tuition and random other fees to pay.

    If students were going to university or high school to learn, instead of being forced there by societal pressures - if they were allowed to learn at their own pace without fear of financial ruin - if they were allowed to explore the topics they love instead of the topics that are financially sound - then there would be no issue with any of these tools. But the truth is much bleaker.

    Great students are using these tools in astounding ways to learn, to grow, to explore. Other students - not bad ones, necessarily, but ones with pressures that make their education motivated purely by extrinsic factors rather than intrinsic ones - have a perfect crutch available to accidentally bypass the necessary steps of learning. Because learning can be hard, and tedious, and expensive, and if you don’t love it, you’ll take the path of least resistance.

    In game design, we talk about not giving the player the tools to optimize their fun away. I love the new wave of AI - I’ve been waiting for this level of natural language processing and generation capability for a very long time - but these are the tools for students to optimize the learning away. We need to reframe learning and education, and bring learning front and center instead of certification. Employers need to recognize this, universities need to recognize this, and high schools and students and parents need to recognize this.