OP here. This thread turned out really nice!

>>1669
>sick lib bro
Thanks, really appreciate it.

>>1664
>How do you measure energy consumed? Or do you just go by "less cycles = less energy"?
I don't measure it in general, I'm not autistic enough for that. However, if an algorithm runs faster, uses less memory, etc., then it usually runs as well on cheap hardware as a bad algorithm does on expensive hardware. If it runs fine on a Raspberry Pi, then I don't have to use a laptop or tower for it. Depending on the impact of your software, this can make a huge difference. For projects that I am serious about, I always try to keep the hardware demands as low as possible. Although this mindset means I almost never finish anything nontrivial, I can live with it, because it feels right to me (crazy, I know).

>>1661
I've been working (for years now, sporadically) on a language that basically has all the good features of C++ while having a clean syntax (a context-free, type-2 grammar), so it becomes easy to parse. I hate how, in C++, you cannot correctly highlight a program's syntax using a simple grammar, because you need a full-blown parser to recognise which name is a type and which name is a value. This semantic ambiguity is something that I don't like.
A * b;
Did I just create a pointer, or did I multiply two variables?
Cleaning up the syntax aside, I'll probably also integrate coroutines, but in a way that allows the user to control how and where the stackframe is allocated. This is the one thing that I like the least about the C++ coroutine TS. You can even control which allocator your vectors use, but you can't control anything about coroutines, which is a shame. And the design seems to enforce allocating a separate frame for every coroutine, which gets pretty resource-consuming once you ramp up the concurrency.
I will probably also add lazy evaluation as a native feature, so that people can finally do stuff like implementing early-out in their overloaded `&&`/`||` operators, which is currently impossible in C++.

>>1658
The phenomenon of people just throwing hardware at problems is one of the reasons why I am very hesitant to upgrade my computer. But I'll have to go from 4GiB RAM to 8GiB, because even having Sublime Text, Firefox, and VLC open is enough to bring my PC to a crawl after a few hours. It seems like Mozilla has bloated their browser so much over time that with every update it needs more RAM. A few years ago, I never had any problems with the amount of RAM I have, except maybe when playing video games.
Meanwhile, people buy 64GiB RAM for their laptops…

>>1659
That's pretty cool. I never got deep enough into all the SIMD stuff, regrettably.

>every algorithm's "end-game" is to be implemented in hardware.
I am trying to keep the language designed such that it is easy to program for custom hardware: You can declare a type that is basically just a byte array plus a set of functions on it (which might be inline assembly instructions). Thus, you could implement a native 32-byte floating point type with ease, if the hardware supports it. This feature is inspired by Bluespec, but I am unsure whether I should go even further, so that you can actually design hardware using my language. I'm kind of torn about it, because it will most probably clash with its usability as a software programming language.