Most programming languages today fall into one of two camps: compiled to native machine code, like Rust, C, and C++, or compiled to bytecode, like Kotlin, Java, and C#. Even some interpreted languages like Python can be thought of as bytecode compiled, since the interpreter caches the bytecode and executes that instead of the source file unless the source file has changed.

I think the main benefit of bytecode compiled languages is that they’re usually platform independent, as long as there’s a runtime for the platform you want to target. But I also don’t know how much this matters anymore, or whether the inefficiencies of bytecode make it worth it.

What do you think? Should new programming languages always be native machine code compiled, like Rust or C?

  • @BlackCentipede@lemmy.ml
    3
    3 years ago

    The opposite, actually, and there’s a reason for this. It’s easier to ensure that the program you built for one platform will work on other platforms, even on different architectures where the pointer size might differ and different CPU extensions might be available. So it’s nice to have a runtime make those determinations and compile code that works on the machine it’s running on.

    There are some pros and cons to runtime-based languages.

    The Pros:

    1. Meta-programming capability: when your program exists in IR form, it’s pretty close to a source-code representation, so you can dynamically emit new code and adjust your program’s behavior on the fly while it’s running (a minimal sketch of this follows after this list). There is a drawback to this, however, as you can see listed in the Cons.

    2. Easier deployability: we don’t have to rewrite our code as much to use the extensions available on the deployed machine, since the runtime handles that for us.

    3. Tiered compilation: the runtime can immediately JIT the IR to machine code that might be inefficient at first, then come back later, when the program isn’t using resources as intensively as usual, and apply various optimization and analysis passes to the IR for better native code.
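
    As promised above, a minimal sketch of what emitting code at run time can look like on the JVM (this assumes a JDK is available at run time so the javax.tools compiler API is present; the RuntimeEmitDemo and Greeter names are made up for illustration):

    ```java
    // Hypothetical sketch: emitting and loading brand-new code while the program runs,
    // using only the JDK's built-in compiler API and a class loader.
    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class RuntimeEmitDemo {
        public static void main(String[] args) throws Exception {
            // Source code assembled at run time -- it did not exist at build time.
            String source = "public class Greeter {"
                          + "  public static String greet(String who) { return \"Hello, \" + who; }"
                          + "}";

            // Write the source to a temporary directory and compile it in-process.
            Path dir = Files.createTempDirectory("emit-demo");
            Path file = dir.resolve("Greeter.java");
            Files.writeString(file, source);

            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // needs a JDK, not a bare JRE
            if (compiler.run(null, null, null, file.toString()) != 0) {
                throw new IllegalStateException("compilation failed");
            }

            // Load the freshly emitted class and call it reflectively.
            try (URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
                Class<?> greeter = loader.loadClass("Greeter");
                Object reply = greeter.getMethod("greet", String.class).invoke(null, "runtime");
                System.out.println(reply); // prints "Hello, runtime"
            }
        }
    }
    ```

    A natively compiled binary can’t really do this mid-run without bundling a compiler along with itself, which is the point above.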

    The Cons:

    1. Your runtime-based program becomes incredibly easy to reverse engineer, because the IR is pretty close to a source-code representation. A way around this is to compile your confidential code to native and then expose an API to the embedded runtime, so users can call that API from C# or Java or whatever (see the sketch after this list).

    2. Performance: as explained above, you might not get the best performance right out of the gate, and on top of that, in a C# application the runtime actually runs an interpreter on the first methods like static void Main rather than JITing them, for startup performance reasons. There has been a push for AOT compilation stages in C# and Java, partly for performance and partly for faster startup times.
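
    And for Con 1, a rough sketch of the “compile the confidential part natively, expose only an API to the runtime” split, shown here with JNI on the Java side (the library name secretcore and the sign method are invented for illustration; the native half would be built separately from C, C++, Rust, etc. and isn’t shown):

    ```java
    // Hypothetical sketch: keep the sensitive logic in a natively compiled library
    // and expose only a thin entry point to the managed/runtime side via JNI.
    public class SecretCore {
        static {
            // Loads libsecretcore.so / secretcore.dll from the library path.
            System.loadLibrary("secretcore");
        }

        // Implemented in the native library; only this signature shows up in the IR,
        // so decompiling the bytecode reveals nothing about the algorithm itself.
        public static native byte[] sign(byte[] payload);
    }
    ```

    Managed code calls SecretCore.sign(...) like any other method, but the interesting part never exists as IR.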

    Those are the things off the top of my head for now, but naturally topics like these go a lot more in-depth than this, so take this as an oversimplification as you will.

    • @AgreeableLandscape@lemmy.mlOPM
      2
      3 years ago

      Meta-programming capability: when your program exists in IR form, it’s pretty close to a source-code representation, so you can dynamically emit new code and adjust your program’s behavior on the fly while it’s running.

      Doesn’t Rust have a comprehensive metaprogramming toolset despite being compiled? Though it does compile to LLVM IR first before going to machine code. Would a language that only compiles to IR be even better at metaprogramming?

      Tiered compilation: the runtime can immediately JIT the IR to machine code that might be inefficient at first, then come back later, when the program isn’t using resources as intensively as usual, and apply various optimization and analysis passes to the IR for better native code.

      I did hear that compiling the source code on the processor you’re going to run it on will produce a more optimized binary for the chip than one compiled on another machine, such as the binary provided by the developer. Is that true, and does that have something to do with your point?

      • @BlackCentipede@lemmy.ml
        2
        edit-2
        3 years ago

        Doesn’t Rust have a comprehensive metaprogramming toolset despite being compiled? Though it does compile to LLVM IR first before going to machine code. Would a language that only compiles to IR be even better at metaprogramming?

        Sure, Rust could do that with LLVM IR, but once it’s deployed on a server as compiled code, it doesn’t have its code in IR form unless you explicitly ship the IR with it, which would probably over-complicate your implementation significantly; a runtime basically ships that by default, with all of the tooling built in for you to do it right out of the gate. The IR in C# is also object oriented and has generics, whereas LLVM IR is pretty “raw”, with no “wrapper” for OOP programming, so that’s a big difference. The CIL (the IR for C#) is closer to C# than it is to machine code, if that makes sense; that’s just how abstracted it is.
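
        To make the “closer to source than to machine code” point concrete with the JVM’s equivalent (Java bytecode rather than CIL; the Adder class is just an example and the javap output shown is approximate):

        ```java
        // A trivial method...
        class Adder {
            int add(int a, int b) {
                return a + b;
            }
        }
        // ...and roughly what `javap -c Adder` prints for it after compilation:
        //
        //   int add(int, int);
        //     Code:
        //        0: iload_1   // push the first argument
        //        1: iload_2   // push the second argument
        //        2: iadd      // add them
        //        3: ireturn   // return the result
        //
        // Names, signatures, and structure all survive in the IR, which is why it is easy
        // to work with at run time -- and just as easy to decompile.
        ```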

        I did hear that compiling the source code on the processor you’re going to run it on will produce a more optimized binary for the chip than one compiled on another machine, such as the binary provided by the developer. Is that true, and does that have something to do with your point?

        It really depends on the runtime. In CoreCLR that isn’t true, because the source-to-IR transformation (the C# compiler) doesn’t apply optimizations specific to your processor to the IR output, but the IR-to-machine-code transformation (i.e. the runtime itself) will then apply various optimizations that work best on the processor you’re on and might not be fast on other processors. When you ship a C# application, you’re usually shipping it in IR form, not native. Every time you run a C# program that isn’t AOT compiled, you’re basically spinning up a JIT compiler and having it compile your program’s code on the fly before the first code runs.
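
        For what it’s worth, the JVM behaves much the same way, and you can watch it happen: HotSpot’s -XX:+PrintCompilation flag logs JIT activity while the program runs. A rough sketch (the HotLoop class and the loop counts are arbitrary):

        ```java
        // Hypothetical sketch: run with `java -XX:+PrintCompilation HotLoop` and watch the log.
        // sum(int) first appears compiled at a low tier (C1) and, once it stays hot, is
        // recompiled at a higher tier (C2) with heavier optimizations -- all decided at run
        // time on the machine the program is actually running on.
        public class HotLoop {
            static long sum(int n) {
                long total = 0;
                for (int i = 0; i < n; i++) {
                    total += i;
                }
                return total;
            }

            public static void main(String[] args) {
                long acc = 0;
                // Call the method often enough that the runtime decides it is worth optimizing.
                for (int i = 0; i < 100_000; i++) {
                    acc += sum(10_000);
                }
                System.out.println(acc); // keep the result live so the loop is not eliminated
            }
        }
        ```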

    • Ephera
      2
      3 years ago

      Easier deployability: we don’t have to rewrite our code as much to use the extensions available on the deployed machine, since the runtime handles that for us.

      I mean, instead of installing the runtime on the target machine, you could install a compiler there and compile the program on the target machine.
      I know that we don’t really do that as an industry, but yeah, I never quite understood why we instead decided we should do general-purpose scripting languages.

      • @BlackCentipede@lemmy.ml
        1
        3 years ago

        They basically did that with the .NET Framework for most of Windows’ history (it has come pre-installed since the Vista days). There are a lot of differences between runtime-based programming languages and low-level compiled languages when you get into the meat of it. Reflection is far better in runtime-based languages than in, say, C/C++, Rust, or DLang; otherwise I would not be using C# anymore.
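
        For a sense of what that reflection difference looks like in practice, here is a minimal Java sketch (the Point class is invented for illustration; C/C++ and Rust have nothing comparable built into the language at run time):

        ```java
        // Hypothetical sketch: inspecting and invoking an arbitrary object at run time.
        import java.lang.reflect.Field;
        import java.lang.reflect.Method;

        public class ReflectionDemo {
            static class Point {
                int x = 3;
                int y = 4;
                double length() { return Math.hypot(x, y); }
            }

            public static void main(String[] args) throws Exception {
                Object unknown = new Point(); // pretend we know nothing about this object

                // Enumerate its fields and read their values...
                for (Field f : unknown.getClass().getDeclaredFields()) {
                    f.setAccessible(true);
                    System.out.println(f.getName() + " = " + f.get(unknown));
                }

                // ...and invoke a method discovered by name.
                Method m = unknown.getClass().getDeclaredMethod("length");
                m.setAccessible(true);
                System.out.println("length = " + m.invoke(unknown));
            }
        }
        ```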

  • Ephera
    1
    3 years ago

    I think they’ll coexist. Statically compiled languages will largely be used for things where latency matters, and runtime-based languages will do most of the rest.

    Because ultimately, hardware is relatively cheap and developer time isn’t. So if you just need performance and don’t care about latency, you can very often solve that with stronger or more hardware.

    I mean, right now we already have people happily launching a whole browser as their runtime, without even particularly clear advantages from that, because they’re just not too worried about the disadvantages either.
    And hardware will mostly just get stronger with time.

    So, while I don’t think Rust’s handful of frills to make things work without a runtime are all too bad, the hurdle for people to jump to a runtime-based language is really low.