
Ask HN: What startup/technology is on your to watch list?

https://news.ycombinator.com/item?id=23276456

Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there’s progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

UE5’s rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.

The Playstation 5. 8 CPU cores at 3.5GHz, 16GB of RAM, roughly 10 teraflops of GPU, and a big solid-state disk. That’s a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We’ve known how for a long time.

Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.

Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US’s wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.

> The Playstation 5. 8 CPU cores at 3.5GHz, 16GB of RAM, roughly 10 teraflops of GPU, and a big solid-state disk. That’s a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

Mmm, this sounds exactly like what people said when the PS3 was about to be released, and I can recall only one example of PS3s ever being used in a cluster, and that probably wasn’t very useful in the end.

The PPC core was OK; the killer was that you had to write code specifically for the Cell (co)processors and their limited memory addressing if you wanted the promised compute performance.

This exactly.

The PS5 and Xbox Series X are commodity PC hardware, optimized for gaming and packaged with a curated app store.

Sony also won’t just sell you hundreds or thousands of them for some kind of groundbreakingly cheap cluster. They will say no, unless you’re GameStop or Walmart.

Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The PS5 is going to be equivalent to a mid-range $100 AMD CPU, a GPU not as good as an RTX 2080 or maybe even an RTX 2070, and a commodity NVMe SSD (probably cheap QLC stuff) that would retail for about the same price as a 1TB 2.5” mechanical hard drive. It is not unique.

Data center servers optimize for entirely different criteria, and game consoles do not make sense for anything close to that sort of workload. For example, servers optimize for 24/7 use and high density. A PS5 doesn’t fit in a 1U rack. It doesn’t have redundant power. Any cost savings on the purchase price are wasted on paying your data center for the real estate, no joke. Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.

GPU-solved LOD won’t save video games from the uncanny valley. In some cases it will make it worse. It makes for nice statues and static landscapes, though.

> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We’ve known how for a long time.

Would love some links to read over the weekend. Thanks!

Things like (there’s a short sketch after this list):

– std::string_view

– std::span

– std::unique_ptr

– std::shared_ptr

– std::weak_ptr (a non-owning reference to a shared_ptr that knows when the parent is freed)

– ranges

– move semantics

– move capture in lambdas

– std::variant

– std::optional
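
To make a few of those concrete, here is a minimal C++20 sketch (all the names here are mine, purely illustrative) of how these types replace raw pointers and (pointer, length) pairs:

    #include <iostream>
    #include <memory>
    #include <optional>
    #include <span>
    #include <string_view>
    #include <vector>

    // std::string_view: a non-owning view of characters, replacing char*/length pairs.
    void print_first_word(std::string_view text) {
        std::cout << text.substr(0, text.find(' ')) << '\n';
    }

    // std::span: a bounds-carrying view over contiguous memory, replacing (ptr, len).
    int sum(std::span<const int> values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    // std::optional: "may be absent" stated in the type, instead of a sentinel like -1.
    std::optional<int> first_even(std::span<const int> values) {
        for (int v : values)
            if (v % 2 == 0) return v;
        return std::nullopt;
    }

    int main() {
        // std::unique_ptr: single ownership, freed automatically at end of scope.
        auto data = std::make_unique<std::vector<int>>();
        *data = {1, 3, 4, 7};

        print_first_word("hello world");               // prints: hello
        std::cout << sum(*data) << '\n';               // prints: 15
        if (auto e = first_even(*data))
            std::cout << "first even: " << *e << '\n'; // prints: first even: 4
    }

None of this makes C++ memory-safe by itself, but it moves ownership and bounds out of convention and into the type signatures.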

To be honest, learning Rust has made me a better C++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It’s not so much that I wasn’t concerned about it before; it’s more that I now strive to express these constraints in code.

Seconded that dipping a toe into Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel unclean! Move ftw.

However, I feel like most of the heavy-lifting features came with C++11.
Span, optional, variant and string_view are nice additions to the toolkit, but they’re enhancements rather than the paradigm shift of C++11 (move, unique_ptr, lambdas et al.).

> Seconded that dipping a toe into Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel unclean! Move ftw.

It’s funny, because while it’s certainly become more influential lately, that subculture existed as a niche in the C++ world before Rust and before C++11. So much so that when I first heard about Rust I thought “these are C++ people.”

The original (and long dead) Rust irc channel used to be full of C++ people chatting with OCaml people. Those were the days 🙂

lol i start most of my big objects by deleting the copy constructor and adding a clone member func 😛

Yeah, if anything, C++ is getting less serious about safety by piling features over features. Just write Rust instead.

Static analysis tools like PVS-Studio are amazing. Software verification in the style of CompCert, where compilation comes with a certificate of correctness, is farther away for C++, but will someday be usable for it.

> That’s a lot of compute engine for $400.

So excited for this as a PC gamer; hardware prices are going to have to plummet. I don’t think supercomputers are likely. The PS3 was a candidate because there was [initially] official support for installing Linux on the thing. Sony terminated that support, and I really can’t imagine them reintroducing it for the PS5.

Sony’s only interest is to do a single deployment, using a customized OS and firmware, and then get as many articles out of the project as possible.

They have zero incentive to subsidize supercomputers. They’re in the business of trading hardware for royalty, store, and subscription payments.

And if they do it would be wise not to trust them, because dropping support for advertised features with hardware-fused irreversible software updates is SOP at this point. FFS, they even dropped support for my 4K screen in an update and I wound up playing the back half of Horizon Zero Dawn in 1080p as a result.

If I were driving, I’d definitely stop for the person-in-the-road projection at https://youtu.be/1cSw4fXYqWI?t=85

LiDAR also isn’t a silver bullet. Similar attacks are possible, from simply shining a bright light that overwhelms the sensor to more advanced attacks such as spoofing an adversarial signal.

LIDAR vs. camera is a red herring. The fact that Elon and his fan club fixate on this shows you how little they understand about self-driving. The fundamental problem is that there is no technology that can provide the level of reasoning necessary for self-driving.

Andrej Karpathy’s most recent presentation showed how his team trained a custom detector for stop signs with “Except right turn” text underneath them [0]. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they’re not even trying, which tells you that Tesla is not building a self-driving system.

[0] https://youtu.be/hx7BXih7zx8?t=753

That same video said they trained a CNN to recognize phantoms using only the video feed, achieving high accuracy (AUC ≈ 0.99).

Self driving cars can’t rapidly move their cameras in multiple spatial directions like humans do on a continuous basis.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

I learned to drive a car when I was 13. My older cousin took me to warped tour, got hammered and told me I had to drive home. I didn’t know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents house in Pasadena. Love to see an AI handle that cold start problem.

Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we’re great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn’t drive tired or angry or intoxicated but obviously it still happens.

On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I’m seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them on how to behave, and right now traffic density is too low for that.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn’t why … that would be really depressing.)

No, they’re underrated. We all know the stats; driving isn’t the safest activity. Having said that, there’s a lot of wishful thinking that the current state of ML could do any better if we were to just put it on the roads today as-is.

I have an AP 2.5 Model 3. It will never be fully self-driving. It still has trouble keeping lanes when the stripes are not simple. It still phantom-brakes.

WRT the F-150:

I am so upset with the state of the auto market when it comes to pricing.

Manufacturing margins are enormous when it comes to cars.

The F150 is no different.

An (effectively) two-seater vehicle stamped out of metal and plastic should never cost as much as those things do.

I hate car companies and their pricing models.

Look up the chicken tax, a tariff passed back in the 1960s that basically stopped foreign manufacturers from selling pickups in the US. That’s why trucks are so much more expensive than other types of cars.

Because small trucks require more fuel- and emissions-efficient engines than larger ones.

Oxide Computer Company

https://oxide.computer/

“True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.”

Corey Quinn interviewed the founders on his podcast “Screaming in the Cloud”, where they explain the need for innovation in that space.

https://www.lastweekinaws.com/podcast/screaming-in-the-cloud…

Basically, on-premises hardware is years behind what companies like Facebook and Google have in-house; it may be time to close that gap.

They also have a podcast, “On The Metal”, which is such a joy to listen to. Their last episode with Jonathan Blow was really a treat.

https://oxide.computer/podcast/

It’s mostly anecdotes about programming for the hardware-software interface, if that’s your thing ;).

And for people wondering why anyone would care about on-premises hosting when you have the cloud: a few weeks ago there was a thread about why you would choose the former over the latter. It shows that a lot of people are actually still on-premises, and for good reasons, which makes a good case for a company like Oxide to exist.

https://news.ycombinator.com/item?id=23089999

Web Assembly

It’s interesting in a bunch of ways, and I think it might end up having a wider impact than anyone has really realized yet.

It’s an ISA that looks set to be adopted in a pretty wide range of applications: web browsers, sandboxed and cross-platform applications, embedded (into other programs) scripting, cryptocurrencies, and so on.

It looks like it’s going to enable a wider variety of languages on the web, many more performant than the current ones. That’s interesting on its own, but it’s not the main reason I think the technology is interesting.

Both mobile devices and cryptocurrencies are places where hardware acceleration is a thing. If wasm is going to be a popular ISA in both of those, might we get chips whose native ISA is WebAssembly? Once we have hardware acceleration, do we see wasm chips running as CPUs someday in the not-too-distant future (CPU with an emphasis on Central)?

A lot of people seem excited about the potential of RISC-V, and ARM is gaining momentum against x86 to some extent, but to me wasm actually seems best placed to take over as the dominant ISA.

Anyways, I doubt that thinking about this is going to have much direct impact on my life… this isn’t something I feel any need to help along (or a change I feel the need to try and resist). It’s just a technology that I think will be interesting to watch as the future unfolds.

I want to believe…
I always thought WebAssembly had a lot of potential; in practice, however, it doesn’t seem to have turned out that way.

I remember the first Unity demos appearing on these orange pages at least 4 or 5 years ago and promptly blowing me away. But, after an eternity in JavaScript years, I still don’t know what the killer app is, technically or business-wise. (Side note – I encourage people to prove me wrong; in fact I’d love to be! That’s what’s so engaging about discussions here. I’d love to see examples of what WebAssembly makes possible that wouldn’t exist without it.)

I can tell you about a WebAssembly killer app for a small niche. lichess uses WebAssembly to run a state-of-the-art chess engine inside your browser to help you analyze games [1]. Anyone who wants to know what a super-human chess player thinks of their game can fire it up on their desktop, laptop, or even phone (not highly recommended, it’s rough on your battery life).

Obviously very serious chess players will still want to install a database and engine(s) on their own computer, but for casual players who just occasionally want to check what they should have done on move eleven to avoid losing their knight, it’s a game changer.

[1] https://lichess.org/analysis

I think chess.com has something similar too, but not sure if it’s powered by wasm.

If it’s not, I’d be interested to see a speed and feature comparison between the two.

I think there might be killer apps that companies aren’t publicizing, because it’s part of their competitive advantage.

Example of WASM being used in a major product:

https://www.figma.com/blog/webassembly-cut-figmas-load-time-…

You can infer from this that it’s making them 3x faster than anything a competitor can make, and probably inspired a lot of those ‘Why is Figma so much more awesome than any comparable tool?’ comments I remember reading on Twitter months back.

An example I can give:

I use WebAssembly for a few cross-platform plugins, e.g. an AR 3D rendering engine in C++ and OpenGL. With very little effort it is working in the browser. No bespoke code, same business logic, etc. Saved a lot of time vs. creating a new renderer for our web app.

For me it allows a suite of curated plugins which work cross-platform. The web experience is nearly just as nice as the native mobile and desktop experience. This in turn increases market growth as more of my clients prefer web vs downloading an app (which is a large blocker for my users). I also enjoy the code reuse, maintainability, etc, 🙂
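
For anyone wondering what that workflow looks like, here is a hedged, minimal sketch of exporting a C++ function to the browser with Emscripten (the file and function names are made up for illustration; this is not the poster's actual code):

    // core.cpp: a stub standing in for shared C++ business logic.
    #include <emscripten/emscripten.h>

    extern "C" {
    // EMSCRIPTEN_KEEPALIVE keeps the symbol from being dead-stripped,
    // so the generated wasm module exports it to JavaScript.
    EMSCRIPTEN_KEEPALIVE
    int add(int a, int b) {
        return a + b;
    }
    }

Built with something like "emcc core.cpp -o core.js", the page can call Module._add(2, 3) once the runtime loads, while the same source file keeps compiling natively for the mobile and desktop builds.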

Another:

This year Max Factor (via Holition Beauty tech) won a Webby award for in-browser AI and AR. This was used to scan a user’s face, analyse their features, and advise them on what makeup, etc., would suit them, after which the user can try it on. This would have been impossible without WebAssembly.

This tech is also used by another makeup brand’s beauty advisors (via WebRTC) to call a customer and advise them in real time on their makeup look, etc.

Is this tech necessary? Probably not, but it is a lot nicer than having to go to a store. Especially when we are all in lockdown 🙂

1) https://www.holitionbeauty.com/

2) https://winners.webbyawards.com/?_ga=2.215422039.1334936414….

3) https://www.maxfactor.com/vmua/

I built a slower version of something with the same idea 13-14 years ago in Flash for http://www.makeoversolutions.com, which most of these makeup companies licensed back then.

I moved on from that a decade ago but it was a neat project at the time.

But I deployed my first integration of WASM about a month ago for PaperlessPost.com. It is a custom H.264 video decoder that renders into a canvas and manages timing relative to other graphics layers over the video. This code works around a series of bugs we’ve found with the built-in video player. It went smoothly enough that we are looking into a few other hot spots in our code that could also be improved with WASM.

One avenue for WASM might be simply polyfilling the features that are not consistently implemented across browsers.

> I’d love to see examples of what WebAssembly makes possible that wouldn’t exist without it.

I’ve been playing with WebAssembly lately, and the moment it clicked for me how powerful it is was building an in-browser crossword filler (https://crossword.paulbutler.org/). I didn’t write a JS version for comparison, but a lot of the speed I got out of it came from doing zero memory allocation during the backtracking process. No matter how good JS optimization gets, that sort of control is out of the question.
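
To illustrate that zero-allocation point (a generic sketch, not the author's crossword code), here is a backtracking search in C++ where all state lives in a fixed-size stack array, so the hot loop never touches the allocator:

    #include <array>
    #include <cstdio>

    constexpr int N = 8; // classic 8-queens as a stand-in for any backtracking fill

    // Can we place a queen at (row, c) given queens already in rows 0..row-1?
    bool safe(const std::array<int, N>& col, int row, int c) {
        for (int r = 0; r < row; ++r) {
            int d = col[r] > c ? col[r] - c : c - col[r];
            if (col[r] == c || d == row - r) return false; // same column or diagonal
        }
        return true;
    }

    // Recursive backtracking: placing and un-placing is just writing an int,
    // so no allocation (and no GC pressure) happens anywhere in the search.
    int solve(std::array<int, N>& col, int row) {
        if (row == N) return 1;
        int count = 0;
        for (int c = 0; c < N; ++c)
            if (safe(col, row, c)) {
                col[row] = c;
                count += solve(col, row + 1);
            }
        return count;
    }

    int main() {
        std::array<int, N> col{};
        std::printf("%d-queens solutions: %d\n", N, solve(col, 0)); // prints 92
    }

In a GC language you can approximate this style, but the runtime doesn't guarantee it; compiling to wasm makes the no-allocation property something you actually control.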

I also think being able to target the browser from something other than JS is a big win. 4-5 years is a long time for JS, but not a long time for language tooling; I feel like we’re just getting started here.

If you’re looking for a real world example of Webassembly being used in production at a large scale for performance gains, check out Figma. Their editor is all wasm based, and is really their secret sauce.

I agree! WASM is very interesting. Blazor is an exciting example of an application of Web Assembly – it’s starting out as .net in the browser, but you can imagine a lightweight wasm version of the .net runtime could be used in a lot of places as a sandboxed runtime. The main .net runtime is not really meant to run unprivileged. It would be more like the UWP concept that MS made to sandbox apps for the windows App Store, but applicable to all OSes.

One thing I haven’t heard much about is the packaging of wasm runtimes. For example, instead of including all of the .net runtime as scripts that need to be downloaded, we could have canonical releases of major libraries pre-installed in our browsers, and could even have the browser have pre-warmed runtimes ready to execute, in theory. So if we wanted to have a really fast startup time for .net, my browser could transparently cache a runtime. Basically like CDN references to JS files, but for entire language runtimes.

This would obviate the need for browsers to natively support language runtimes. It’s conceptually a way to get back to something like Flash or Silverlight, but with a super simple fallback that doesn’t require any plugin to be installed.

I look forward to in-browser DLL hell /s

I’m cautiously optimistic about Blazor; it definitely makes streaming data to the DOM much easier.

Blazor seems like the one application of WASM at the moment that goes in completely the wrong direction.

People are already whining about JS bundle size and even the small .net runtimes are >60kb.

Yew on the other hand seems to fit right into what WebAssembly was made for.

Cautionary tale: we’ve been here before with Java bytecode in hardware, like ARM’s Jazelle. It didn’t take over the world.

Absolutely, but there’s been plenty of technologies where the time wasn’t right the first time around, but it was the second, or third, or fourth.

Even closer to home. Palm, RIM, Microsoft, Apple and Google have all said at one point that web apps were the answer for mobile apps….

I’d argue somewhat the opposite. Because WebAssembly is abstract but low level, it makes it really easy for a platform to optimize specifically for that platform, so instead of creating a need for specific platforms, it’ll allow more diverse systems to run the same “native” blobs.

Wasm was not designed to be a hardware accelerated ISA. It was designed as an IL/bytecode target like JVM and .NET.

Even if it were, there is an extremely high bar to meet for actual new ISAs/cores. There is no chance for Wasm to compete with RISC-V, Arm or x86.

> It’s an ISA that looks set to be adopted in a pretty wide range of applications, web browsers, sandboxed and cross platform applications, embedded (into other programs) scripting, cryptocurrencies,

Imagine if the crowd didn’t fall for the HODL hypers and called these things cryptolotteries or something like that — they are a betting game after all — how ridiculous would it look to include them in every discussion like this.

What are you adding to the discussion? This is a technical forum, the least you could do is comment on the use of Web Assembly in Ethereum or maybe anything of substance. There’s a bunch of technically interesting topics to bring up but somehow I doubt you know anything about them.

I speak up against cryptocurrency because it’s a cancer: hype that adds to climate change without any real-world use case whatsoever.

Have you looked deeper than just hodl memes and Bitcoin? Ethereum is a highly technical project that doesn’t really care about money, and lots of people here on Hacker News find interesting topics in it. Web Assembly will be its base programming platform, for example, which is one of the reasons the parent included it.

If you read about the Baseline protocol (EY, Microsoft, SAP etc building neutral interconnections between consortiums), ENS/IPFS, or digital identity systems you might find something that interests you and is more relevant than the mindless hodl ancaps. It’s actually a pretty exciting field to be in as a computer scientist with almost no end of boundary pushing experiments and cryptographic primitives to play with and build on top of.

Most new cryptocurrencies are moving away from PoW because a) it’s a massive waste of electricity and b) it’s not actually secure anyway, since we’ve seen a consolidation of mining power among major ASIC customers with cheap power costs (notably in China). Ethereum is moving to PoS in 2020 or 2021, and EOS, Stellar, Tezos, Cardano, etc. are already PoS or derivatives.

That potential has been there for many, many years; I don’t see ‘the thing’ that provides the critical mass necessary to make it work in reality.

Web Assembly is one of the more misunderstood technologies in terms of its real, practical application.

At its core, it crunches numbers in a limited memory space. So it can provide some performance enhancements for running certain kinds of algorithms, and it means you can write those in C/C++, or port them; Autodesk does this for some online viewers. But this is actually a surprisingly narrow area of application, and it still comes with a lot of complexity.

WA is a black box with no access to anything, and how useful is that, really?

Most of an app is drawing stuff on the screen, storage, networking, user event management, fonts, images, videos – that’s literally what apps are. The notion of adding a ‘black box for calculating stuff more quickly’ is a major afterthought.

At the end of the day, JS keeps improving quite a lot and does pretty well; it might make more sense to have a variant of JS that can be optimized even further than to build something new from the ground up.

WASI – the standard WA system interface – is a neat project, but I feel it may come with some serious security headaches. Once you ‘break out of the black box’… well… it’s no longer a ‘black box’.

WA will be a perennially interesting technology, and maybe the best example of something that looks obviously useful but in reality isn’t. WA actually serves as a great instructional example for Product Managers, to articulate what things actually create value and why.

It will be interesting to see how far we get with WASI.

I think you’re underestimating WASI. With projects like CloudABI, where an existing app is compiled against a libc with strong sandboxing, really cool things happen.

Thanks, but the same thing was said about WASM and asm.js.

For 5 years we’ve been hearing about how great they are, except nobody is really using them.

So now it’s ‘the next thing’ that will make it great? Except that next thing isn’t there yet: not agreed upon, not implemented, and there’s a lot we don’t know about it.

Like I say, this is a textbook example of tech hype for things that are probably not as valuable as they appear.

If (huge if) WASI were great, functional, widespread, and smoothly integrated, I do agree there’s more potential. But whether that will really happen is questionable, and whether it will be valuable even if it does happen is also questionable.

Materialize https://materialize.io/
Incremental update/materialization of database views with joins and aggregates is super interesting. It enables listening to data changes not just at the row level, but at the view level. It’s an approach that may completely solve the problem of cache invalidation for relational data. Imagine a memcached server, except it also guarantees consistency. In addition, being able to listen to changes could make live-data applications trivial to build, even with filters, joins, whatever.

Similarly, someone is developing a patch for Postgres that implements incrementally updating/materializing views [1]. I haven’t tried it, so I can’t speak to its performance or the state of the project, but according to the Postgres wiki page on the subject [2] it seems to support some joins and aggregates, though it’s probably not something that would be recommended for production use.

[1] https://www.postgresql-archive.org/Implementing-Incremental-…
[2] https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc…

+1, very excited about this.

They’re marketing it in the OLAP space right now, but at some point I’d like to try integrating it with a web framework I’ve been working on.[1][2] It’d be a more powerful version of firebase’s real-time queries. Firebase’s queries don’t let you do joins; you basically can just filter over a single table at a time. So you have to listen to multiple queries and then join the results by hand on the frontend. Doesn’t work if you’re aggregating over a set of entities that’s too large to send to the client (or that the client isn’t authorized to see).

[1] https://findka.com/blog/migrating-to-biff/
[2] https://github.com/jacobobryant/biff

Thanks for the vote of confidence! One thing: We’re not marketing it in the OLAP space. Our existing users very much are building new applications.

Initially we went for the metaphor of “what if you could keep complex SQL queries (e.g. 6-way joins and complex aggregations, the kinds of queries that today are essentially impossible outside a data warehouse) incrementally updated in your application within milliseconds? What would you build?”

We’re moving away from that metaphor because it seems it’s more confusing than helpful. Tips always appreciated!

Ah, thanks for the correction. In any case I’m looking forward to trying it out eventually; I’ve got a number of other things ahead of it in the queue though.

My suggestion would be consider comparing it to firebase queries. Firebase devs are already familiar with how incrementally updated queries can simplify application development a lot. But, despite Firebase’s best marketing attempts, the queries are very restrictive compared to sql or datalog.

Materialize is based on differential dataflow, which is in turn based on timely dataflow. The abstraction works like magic: distributed computation, ordering, consistency, storage, recalculation, invalidations… all those hard-to-solve problems are handled naturally by the computing paradigm.
Maybe the product is similar to others, but the principles behind it are not.

Rust lang – Memory safety through zero cost abstraction as a way to eliminate a large class of errors in systems languages is interesting. Especially if it allows more people to write systems programs.

WASM – Mostly as a compile target for Rust, but I think this changes the way software might be deployed. No longer as a website, but as a binary distributed across CDNs.

ZK-SNARKs – Zero-knowledge proofs are still nascent, but being able to prove you know something without revealing what it is has specific applicability to outsourcing computation. It’s a dream to replace cloud computing as we know it today.

Lightning Network – A way to do micropayments, if it works, will be pretty interesting.

BERT – Newer models for NLP are always interesting because the internet is full of text.

RoamResearch – The technology for this has been around for a while, but it got put together in an interesting way.

Oculus Quest – Been selling out during COVID. I sense a behavioral change.

Datomic – Datalog seems to be having a resurgence. I wonder if it can fight against the tide of editing in-place.

Rust is awesome and very eye-opening, and it’s a great alternative for almost any Golang use case; I just hope they prioritize improving compilation times if possible.

Datomic… not just because of Datalog, but because it’s hands down the best implementation of an AWS Lambda-based workflow I’ve seen (Datomic Ions). It’s such a peach to work with.

> Lightning Network – A way to do micropayments, if it works,

You can stop the tape right there. You know it doesn’t and it can’t.

BTW, as of two weeks ago the official Oculus Quest store is no longer sold out (though it might be again; I haven’t checked since it came back in stock).

WRT Datomic, there’s also another Clojure DB using Datalog, called Crux, that’s pretty interesting. I built my most recent project on it.

Oxford Nanopore sequencing. If a few problems can be figured out (mainly around machine learning and protein design), it will beat every other biological detection, diagnosis, and sequencing method by a massive margin (not 10x, more like 100x-1000x).

It’s hard to overstate how big nanopore sequencing will be if a few (hard) kinks can be worked out. Basically, it has the potential to completely democratize DNA sequencing.

Here is an explanation of the technology – https://www.youtube.com/watch?v=CGWZvHIi3i0

> Illumina has had such a stranglehold on the market – devices start at around 35k and go up into “this is a house now” territory.

You cannot effectively sell this kind of device under $25K; support costs simply eat your profit margin.

This is a constant across industries. You either have a $250 thneed (and you ignore your customers) or a $25K thneed (and you have shitty customer support) or a $250K thneed (and you have decent customer support).

Can this be used to make faster coronavirus tests? If so, maybe this is the time to Manhattan Project this technology.

Generally, yes absolutely. I’ve been doing a project called “NanoSavSeq” (Nanopore Saliva Sequencing) in my free time. It’s published on dat right now since the raw files for Nanopore are really big (got too big for hashbase). There is one company doing it as well, but my version is completely open source and I’ve optimized it for affordable automation.

To give you a sense: you can buy one for $1k and do as much detection as a $64k device, and it’s small enough to fit in a backpack. One device should be able to do 500-1000 tests per 24 hours at a cost of about $10 per test, not including labor.

Geometric algebra: https://www.youtube.com/watch?v=tX4H_ctggYo

It makes a lot of hard physics problems (Maxwell’s equations, relativity theory, quantum mechanics) much more understandable and (I’m told) unifies them in a common framework. I think it will help your average developer become comfortable with these areas of physics.
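
The poster-child example of that unification is spacetime algebra, where Maxwell's four equations collapse into a single one (a standard result due to Hestenes, sketched here from memory):

    % With the vector derivative \nabla = \gamma^\mu \partial_\mu, the
    % electromagnetic field bivector F = \mathbf{E} + I\mathbf{B}, and
    % the spacetime current J, Maxwell's equations become simply
    \nabla F = J
    % Splitting this single equation by grade recovers the familiar
    % Gauss, Faraday, and Ampere-Maxwell laws.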

I can’t wait to read into this. Switching formulas to tau was incredibly useful for me when I was doing a lot of 3D math for game dev.

If by application database you mean something at the level of an RDBMS, then no. It’s a key-value data store. You get your CRUD operations, expiry, key listing, and the ability to paginate, with keys always returned in lexicographically sorted order. Any relational data you want would use key prefixes, e.g.:

  article:1 -> {"user": 1, "content": "..."}
  user:1 -> {"name": "username"}
  user:1:articles -> [1, ...]

I’ve heard many good things about Cloudflare Workers.

Excuse my ignorance and n00bness, but are they essentially a Cloudflare version of AWS Lambda, Google Cloud Functions, and Netlify Functions, or are they something different/better?

IIRC Cloudflare Workers run at each Cloudflare PoP, which have higher geographical density than AWS regions, so latency experienced by end-users may be lower.

More lightweight. It’s just V8, so there’s basically no warm-up time.

They have vastly more PoPs than Amazon, so global performance for these is on a different level. But they are also more limited in compute and serve a slightly different purpose.

Sidewalk delivery robots.

The problem is a lot easier than driverless cars (everything is slower, and a remote human can take over in hard places), and there’s huge potential to shake up the short-to-medium-distance delivery business. It’s the sort of tech that could quickly explode into hundreds of cities worldwide like e-scooters did a couple of years ago.

Starship Technologies is the best-known and furthest-advanced company in the area. https://www.starship.xyz/

We’ve been ordering through them once every couple of weeks during the pandemic. It’s really cool. Even though the robot itself is really slow (a good 40 minutes for a 1-mile journey), they’re usually pretty available and responsive, so we get things faster than from human-based platforms (where a person has to be available, then go to the pickup point, then deliver).

Subvocal recognition: https://en.wikipedia.org/wiki/Subvocal_recognition
Imagine how much more people would use voice input if they could do it silently.

Also neural interfaces like CTRL-labs was building before being acquired. Imagine if you could navigate and type at full speed while standing on the subway.

I think that rich, high fidelity inputs like those are going to be key to ambient computing really taking off.

LibreSilicon [1]. Making chip manufacturing libre is extremely important to our freedom from corporate and state tyranny.

> We develop a free (as in freedom, not as in free of charge) and open source semiconductor manufacturing process standard, including a full mixed signal PDK, and provide a quick, easy and inexpensive way for manufacturing. No NDAs will be required anywhere to get started, making it possible to build the designs in your basement if you wish so. We are aiming to revolutionize the market by breaking through the monopoly of proprietary closed source manufacturers!

[1] https://libresilicon.com/

Optical Coherence Tomography (OCT) occupies an intermediate position in accuracy/skin depth for soft tissue, between ultrasound and MRI.

Optically pumped magnetometers (OPMs) approach SQUID-level accuracy without the need for a supercooled device, and can be worn or used as a contact sensor like ultrasound.

LoRa: a long-range (10km+), low-power, sub-gigahertz radio protocol useful for battery-powered IoT devices transmitting small amounts of data.

Heat cabinets for infectious diseases: an old technology used to fight polio and other diseases that went out of favor with the introduction of antibiotics. May find utility against novel viral infections.

UV light treatment of blood. Another old technology that may find use against novel infectious agents. Stimulates the immune system to fight viral infections.

Oh man, I used to do research in OCT for deep brain stimulation! It’s pretty cool tech, that’s for sure. It’s got a huge market for bio applications and certain industrial ones.

That said, optics is a super finicky field. You can come in and get a Nobel for 5 hours of work, or you can spend 50 years in a dark room trying to get things together. Alignment is crazy difficult, though it seems it shouldn’t be.

Anyone that wants to dive into optics: Just do it for 2 years, no more.

> UV light treatment of blood.

What? No… No don’t do this. This is a discarded idea from the era before molecular biology, and it was discarded for very good reason.

Opportunity to give people leukemia. These ideas are at a prehistoric level of biology. We’re way beyond this silliness now.

www.sens.org: solving the problem of aging and diseases of aging. Watch a few interviews with Aubrey de Grey to get a better idea of the possibilities of their research. Though this comes under “to watch” not for the immediate future, but for the next decade or two.

GPGPU. GPU performance is still increasing along Moore’s law, while single-core performance has plateaued. The implication is that at some point the differential will become so great that we’d be stupid to continue running anything other than simple housekeeping tasks on the CPU. There’s a lot of capital investment that needs to happen for that transition – we basically need to throw out much of what we’ve learned about algorithm design over the past 50 years and learn new parallel algorithms – but that’s part of what makes it exciting.
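
For a small taste of that shift (a sketch, not a GPGPU program per se): C++17's parallel algorithms let you opt into the reassociation that parallel hardware needs. With GCC this typically requires linking TBB, and NVIDIA's nvc++ can reportedly offload these execution policies to the GPU:

    #include <execution>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> v(100'000'000, 1.0);

        // std::accumulate promises strict left-to-right evaluation;
        // std::reduce may reassociate, which is exactly what lets the
        // runtime spread the sum across many cores (or a GPU).
        double total = std::reduce(std::execution::par_unseq,
                                   v.begin(), v.end(), 0.0);

        std::cout << total << '\n'; // prints 1e+08
    }

The algorithm-design point is the same at every scale: order-independent formulations parallelize, order-dependent ones don't.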

Arm and RISC-V are both getting scalable vector compute support. Could lead to GPU-like compute capabilities without all the goofiness of GPUs.

The H.266 / VVC video compression standard will be finalized in a few months. Ignoring licensing issues (yes patents blah blah blah) industry-wide efficiency wins like that are always nice.

Generative machine learning (think GANs for images or GPT-2 for text) can be applied to video games. Truly unique narrative experiences!

Everything remote work-related. I previously thought my career would miss the WFH revolution and most knowledge workers would still go to the office until at least 2050, but now it seems clear that is going to get dramatically accelerated.

– Far UVC lights (200 to ~222nm) such as Ushio’s Care222 tech. This light destroys pathogens quickly while not seeming to damage human skin or eyes.

– FPGAs. I’m no computer engineer, but it seems like this tech is going to drastically increase our compute soon.

– Augur, among other prediction platforms. Beliefs will pay rent.

– Web Assembly, as noted elsewhere. One use case I haven’t seen mentioned here yet is distributed computing: BOINC via WASM could bring many more users onto the network.

– Decision-making software, particularly that which leverages random variable inputs and uses Monte Carlo methods, and helps elicit the most accurate predictions and preferences of the user.

I’m an FPGA engineer and I doubt they will go mainstream. They work great for prototyping, low-volume production, or products that need flexibility in features, but they are hard to use (unlikely to get better in my opinion) and it’s hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

That said, they are very cool! And learning to create FPGA designs teaches you a lot about how processors and other low level stuff works.

>it’s hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

I see them going mainstream when brain-computer interfaces go mainstream (probably a long way away), since a lot of that work (in my experience working in a couple of labs and on some related hardware) depends on processing huge volumes of sensor data, most of which is thrown away due to sheer volume, and on transferring it back and easily updating filtration matrices tailored to the sampled data.

One thing that is changing quickly: deep learning, particularly inference on the edge. FPGAs are more versatile than ASICs.

This just provides a cost advantage though, right? I mean, that’s great, love me some margin, but it’s not really a new frontier. Unless I’m wrong?

> Far UVC lights (200 to ~222nm)

OK, these are not safe wavelengths, and whatever you’re reading is not right. This is absolutely ionizing radiation. The rate of formation of thymine dimers in this regime is similar to that around 260 nm. That is, it causes DNA damage. Please see Figure 8 below:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1751-1097….

The logic of the claim that you can destroy a pathogen with UV without causing damage to human tissue is incongruous. If it kills the pathogen, it causes radiation damage to human tissue as well. One cannot dissociate the two, because they are caused by the same photoionization mechanism.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5552051/

> We have previously shown that 207-nm ultraviolet (UV) light has similar antimicrobial properties as typical germicidal UV light (254 nm), but without inducing mammalian skin damage. The biophysical rationale is based on the limited penetration distance of 207-nm light in biological samples (e.g. stratum corneum) compared with that of 254-nm light. Here we extended our previous studies to 222-nm light and tested the hypothesis that there exists a narrow wavelength window in the far-UVC region, from around 200–222 nm, which is significantly harmful to bacteria, but without damaging cells in tissues.

> As predicted by biophysical considerations and in agreement with our previous findings, far-UVC light in the range of 200–222 nm kills bacteria efficiently regardless of their drug-resistant proficiency, but without the skin damaging effects associated with conventional germicidal UV exposure.

Gaze tracking. I’ve used the dedicated gaze tracking sensors from Tobii and it’s really natural and responsive. I think we’re going to see a lot of touchless interaction become popular in the post-covid world.

A friend of mine is working on coscout. It’s in beta right now, but he showed me some pretty insane machine learning based insights for companies, investors and founders.

Things like

– When will this company raise the next round?
– What is the net worth of <literally anyone>?
– What is the probability that this investor will invest in you? (given your sector, founder age, pedigree, gender, market conditions, do you have an MVP or not etc.)
– A bunch of other complicated stuff I didn’t really understand

Definitely worth keeping an eye on if you’re into this kinda stuff: https://coscout.com/

https://luna-lang.org – a dual-representation (visual & textual) programming language

RISC-V

Zig programming language

Nim programming language

(also some stuff mentioned by others, like WASM, Rust, Nix/NixOS)

> luna-lang

Whoa… had to do a double take there.

Great to see luna seems to be alive yet again – now “enso lang” per github [i]. A git commit just days ago… so here’s hoping! It is such a great concept.

[i] https://github.com/luna/ide

I am sure I wouldn’t use a service that can literally get into each and every device on my private network if they want to, or worse, if they get hacked. Each and every device on the network automatically accepts whatever public keys and endpoints get advertised by their servers and automatically connects to them. It’s not only an overpriced, mediocre product; from a security perspective, it’s the most dangerous SaaS service I’ve ever seen.

My biggest fear is that once this company gets tied to WireGuard and the security disasters come out, WireGuard’s fate will be tied to a mediocre commercial product that put money above engineering decisions.

I wrote a guide on connecting Hasura + Forest admin for no-code SaaS apps + admin backends:

“14. Connect Forest Admin to Hasura & Postgres”

http://hasura-forest-admin.surge.sh/#/?id=_14-connect-forest…

For Heroku specifically you need to make sure that the client attempting to connect does it over SSL, so set SSL mode if possible (many clients will do this by default).

To get the connection string for pasting into Forest Admin config, run this:

    heroku config | grep HEROKU_POSTGRESQL

That should give you a connection string you can copy + paste to access externally from Heroku:

    HEROKU_POSTGRESQL_YELLOW_URL: postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398

https://devcenter.heroku.com/articles/heroku-postgresql#exte…

Awesome, will definitely check it out. I think it was your post in a different thread earlier this year where I came across it originally. I remembered the name because you helped me on the Hasura Discord (thank you for all your awesome input there), and it looks so promising.

Ginkgo Bioworks, Oxidize, WASM (& Deno), serverless via isolates (e.g. Cloudflare Workers), Neuralink, OpenAI

I was excited for memristors in 2008 when HP announced they were right around the corner. They even built a new computer and a new OS to take advantage of memristors [1]. And then it never happened and no one has ever built one and it’s pretty much vaporware. I would be hesitant to trust anyone who says they’re anywhere close. It’s just not a technology that actually exists.

[1] https://www.extremetech.com/extreme/207897-hp-kills-the-mach…

> They even built a new computer and a new OS to take advantage of memristors

I could be wrong but I think I remember reading somewhere they ran into patent infringement issues that they couldn’t get around or something like that.

VR. It seems just about ready, but still a little too expensive.

While good games are obviously already there, I’m more curious about work. Would an infinite desktop with an appropriate interface beat the reliable old 2×24″ screen setup I have? I think it could.

Not just for games. Imagine VR tourism, meetings… or even escapism, especially these days. Last time I used one, the graphics weren’t immersive enough yet. I don’t mind tethering it to a powerful computer, but image quality is a must.

> While good games are obviously already there, I’m more curious about work.

Good games are most definitely not there. The consensus is that Alyx is really the only worthwhile VR title. Just about everything else is gimmicky and trite. VR still has a long way to go.

I guess I’m much more conservative than other folks, but I think we’ve scratched only 10% of the surface of the benefits that things like Kubernetes, Consul, Vault and Terraform should/will provide.

So they’re on the list. I feel like at my job I’m pushing at the edges (as far as running large scale, stable production) and we’ve still got miles left.

Also Bazel.

I guess this is a boring answer.

Is there anything like a “Terraform provider for bare metal”? It would be so convenient to go from full nuke-and-pave to a functional dev machine with a single config repo.

Zig Programming language.

Because it’s basically C-+, it’s extremely easy to use, and also extremely easy to parse, so (and maybe this is self-serving because I did it) I see a lot of automated code-generation tools hooking Zig into other PLs.

C’s age is starting to show (and it’s becoming a problem), and I think Zig has the opportunity to be a place for old C heads like me to go to stay relevant, modern, safe, and productive.

Roam Research
https://roamresearch.com/

A tool for networked thought that has been an effective “Second Brain” for me.

I’m writing way more than ever through daily notes, and the bi-directional linking of notes lets me build smarter connections between notes and structure my thoughts in a way that helps me take more action and build stronger ideas over time.

I’ve had good experiences with personal Wikis before, but have fallen back to plain notes. I think notetaking by itself is immensely powerful and underappreciated in general (wish I had started earlier), and all that’s necessary is building a habit out of it. Maybe this can give it a little extra spice (hopefully not as cumbersome as a full blown personal website).

Especially now. I’ve been using HN for years, and before COVID I could get through all the posts from the day before in 20 minutes. Now it takes 2-3 hours. Everyone is sharing.

I’m kinda sad for you – I’ve been using and advocating zerotier for a while (it’s amazing and indispensable)…but in my circles the word ‘wireguard’ has got people excited, which (anecdotally) is benefitting tailscale and generating more hype around them than zerotier ever got. Hopefully a rising tide will lift all ships and you find a way to capitalise on it 🙂

(I prefer device-based zerotier-style access rather than login-based tailscale-style so that does sway me to zerotier…but I have to admit tailscale looks more polished, e.g. the screenshot of seeing other devices on the network. I get it’s not a fundamental feature! But I can’t help but appreciate it)

We are doing fine and V2 is coming soon with a ton of improvements. I just have to occasionally point out our existence again.

The pulldown showing other devices on a network does look spiffy, but that won’t scale. We have users with thousands of devices on a virtual LAN, and the protocol will scale far larger. Not only will that not fit in a menu, but any system that relies on a master list will fall down; that list, and the cost of refreshing it, will get huge.

We are doing the tech first, spiff second.

Some feedback: I think I stumbled upon ZeroTier a while back and didn’t really get what it is. IIRC it felt like something that is only useful for big companies, which is exactly what I felt today.

I think the website could do a better job showcasing how it’s used.

Hope my feedback is helpful and wish all the best!

Combining statistics-based AI with GOFAI to create systems that can both recognize objects in a sea of data and reason about what they see.

The MiSTer FPGA-based hardware platform.

RISC-V is gonna change everything. Yeah, RISC-V is good.

I’d love to try it if they didn’t tie the language to their hosting service. I understand the necessity of the coupling but until someone can start a competing hosting company with the same language, it’s not something I’m interested in.

A while back I installed ServerPilot, which automatically sets up Nginx/Apache/PHP/MySQL for you and handles security updates. This made those $5 VPSes much more appealing [1], as I could install lots of small Node.js apps on a single server and avoid managed hosting providers, who seem to prefer charging per app instance.

Anyway, ServerPilot then scrapped their free plan, so I’ve been looking for an alternative. Cloudron looks cool; I don’t see anything specific to Node.js/Express, but it does have a LAMP stack that includes Apache, so I might try that. Otherwise I’ll probably use something like CapRover [2], a self-hosted platform-as-a-service.

[1] https://twitter.com/umaar/status/1256155563748139009

[2] https://caprover.com/

And SSL is a cinch! I have been very happy with Dokku; I’m surprised I don’t see it mentioned around here more often.

Would love to get your opinion, as I’m building a product that competes with ServerPilot in this space. Is $5 too expensive for the service? Or is it just too expensive because the billing increases as you have more servers under management, and they charge you per app as well?

Are there features ServerPilot is missing that would justify the price more for you? Some examples might be monitoring, analytics, automated security patching, containerization of workloads, etc.

Would the plan be more appealing if the cost of the plan, the portal, and the VM hosting itself were all rolled into one? (i.e. you would just pay one company, rather than having to sign up for DO as well as ServerPilot).

1) Independence from the hosting provider is a must. I don’t want to be forced to use your VPS service when I already have all my infrastructure on Linode, DO, Vultr, etc.

2) It should be free when used for non-commercial applications, with multiple servers included.

3) Keep the common, already-available configurations free: LAMP, LEMP, Python, Let’s Encrypt, email. Charge for things no other panel, free or otherwise, typically supports: LiteSpeed, Go, Caddy, load balancing, SQL replication, GraphQL, etc. That’s value.

“Self-hosting apps is time consuming and error-prone. Keeping your system up-to-date and secure is a full-time job. Cloudron lets you focus on using the apps and not worry about system administration.”

neat, don’t think I’ve seen something like this before!

It kind of just looks like a simplified version of cPanel, which has been on every VPS for the last 20+ years.

They’d be so much more successful if the “Install” button didn’t lead with this:

  wget https://cloudron.io/cloudron-setup
  chmod +x ./cloudron-setup
  ./cloudron-setup --provider [digitalocean,ec2,generic,ovh,...]

Which, as a regular user, I don’t understand when I see it.

Hell, I am a dev, and I still didn’t know that this would let me create one quickly.

It is certainly less secure than just calling the API of those cloud providers directly from the site backend.

Low-level stuff & Linux: RISC-V, Vulkan, Wayland, Sway WM, Wireguard, Zig.

Web or high-level stuff: deno, Svelte, Vue, Webassembly, WebGPU, Flutter.

I had a blast writing a graph editor in Svelte, but it was hard to debug. That was right after 3.0 came out; if it’s easier to debug now, I’d love to build with it.

Looks like the ScyllaDB playbook, i.e. rewrite a popular Java app in C++ and sell it as a much faster product.

Going to be interesting to see if they survive as the pace of JVM improvements has been rapidly increasing in the last year or so.

thanks, though what we sell is operational simplicity. speed is nice, but not the main benefit. a single binary that’s easy to run is what CIOs seem to be interested in. though we are young. fingers crossed it works 🙂

Caddy, specifically v2 (https://caddyserver.com/v2)

I’ve been using Caddy v2 all through beta/RC and glad it’s finally stable with a proper release. I moved away from using nginx for serving static content for my side projects and prototypes. I’m also starting to replace reverse proxying from HAProxy as well. The lightweight config and the automated TLS with Let’s Encrypt makes everything a breeze. Definitely has become part of my day-to-day.
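
For a sense of how lightweight the config is, a complete Caddyfile for a static site with an API behind it can be about this short (the domain and paths here are hypothetical; HTTPS comes automatically via Let's Encrypt):

    example.com {
        root * /var/www/site
        file_server
        reverse_proxy /api/* localhost:8080
    }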

Dot Net Core (C#/Winforms) compiling to native code.

ReactOS/Wine. Lately I’m getting worried about where the Windows OS seems to be headed. ReactOS is slowly catching up, and recent developments seem promising. There are still many things I need that are not multi/cross-platform.

Cloudflare Workers. It was on my watch list at the beginning of the year, and I’m just about to put a 20k-page “static” (with tons of interactivity) site into production on them.

Using it as an API gateway and KV store for truly static assets is amazing.

Ubuntu, ParrotOS, Kali

Julia Lang is fun

For devops, Pulumi/CDK

I watch graph dbs but they all suck or are too expensive or have insane licenses (Neo4j, RedisGraph)

Differentiable programming, Zygote, Jax, PyTorch, TF/Keras

Optimal transport (sliced fused Gromov-Wasserstein)

AGI: Levin, Solomonoff, Hutter, Schmidhuber, Friston

Ramda is amazing

George Church’s publications

I’m also super interested in molecular programming

DEAP is fun for tree GP a la Koza

Judea Pearl’s work will change the world once folks grok it

Secure multiparty computation

NVMe over Fabrics.

Only started to become available last year in AWS’ more expensive instance types. But hoping it will become more widespread.

Benchmarks with Spark show real-world performance improvements of 2-3x, and SSDs will be much faster with PCIe 4.0.

Sure but the Elastic Fabric Adapter is only on the top tier of instance types.

Hoping it trickles down for us normal people.

Oh, I thought you were talking about how EBS disks are presented as NVMe even though they run over the EC2 network fabric.

The reason these things work is that they use a datacenter in your city; that’s why latency is low. You’d have to have a doctor in the same locality, which I think is not what you’re imagining.

I think 3D printing still has enormous potential.

They are printing jet engine parts with it these days.

when can I print a hamburger? that’s when I’ll know we have made it to the future!

Programming languages count as technology, right?

I’m really excited for what Julia is doing – making a really nice ecosystem of modern, dynamic, high performance code.

My list would be:

1. Swift – you can mix functional and object-oriented code in a way I’ve not seen anywhere else. It’s also going multi-platform, including Windows in the next version, and it’s making inroads with Swift for TensorFlow. I can see a lot of really cool things coming from this once it’s multi-platform.

2. The Jai language by Jonathan Blow. I’m not sure when it will come out, but what’s been shown looks promising; a game-specific language could drive innovations that later carry over to other languages.

3. Next-gen consoles. The Xbox Series X and PS5 are both doing some cool things with memory management, SSDs, and GPUs. Many of these innovations will make it to PCs later.

4. New email features (Superhuman, and HEY by DHH). It seems like innovation is finally happening in this space.

5. Game engine innovations. Hammer 2 has some really cool UI for level design, Unreal 5 has some great lighting and handling of 3D geometry, and id Tech is using 3D decals to cool effect without being expensive. A lot of the technology happening in games will spill over into other areas: Unity is doing stuff with the automotive industry, Unreal with architecture.

6. AI used in making art. A good example is Unity’s new ArtEngine (Artomatix). https://artomatix.com/

7. Generative Design for engineering.

8. Dreams on PS4 – how quickly people can make something in it is amazing; if it ended up on PC or VR it could change everything.

9. AR as a tool for creators more than as a tool for consumers. 3D interactive CAD like Iron Man is more exciting than a game that makes you dizzy.

Wasmer is a project I’m watching closely for many reasons. I feel that as WASM becomes more commonplace, the role Wasmer plays will become clearer.

zk-S[NT]ARKs and related zero-knowledge tech, along with scalability improvements to Ethereum, are poised to revolutionize financial tech. It will take decades to play out, much like the consumer PC and the internet, and we are still in the very early stages.

But when it really picks up, the impact will be as big as when the Dutch invented modern finance in the early 1600s.

Rocket propulsion tech. Not just because of SpaceX, but I really hope we develop newer and more efficient propulsion techniques.

I’m definitely excited to see how the VASIMR tech being developed by Ad Astra pans out, and whether anybody manages to build a functional nuclear thermal rocket. Hopefully the new super-heavy-lift capacity that’s expected in the coming decade will help to enable the groups working on those and other designs.

NextDNS (nextdns.io) is a genius idea that I very much wish I had thought of. I am a paying customer as of this past week and am integrating it in all the places I always meant to put a Pi-hole…

We have had AI-assisted code completion for a long time now. It used an obscure and esoteric form of Symbolic AI better known as type systems.

Not sure if this counts, but I look forward to seeing the future of meat alternatives – impossiblefoods.com, beyondmeat.com, eatnuggs.com, etc.

Deep-learning-driven NLP. We’ve seen massive advancements in research, but from personal experience working with a few startups, these new forms of NLP are just beginning to hit the market now (most companies are still using classical NLP techniques like part-of-speech tagging, etc.). It’s a huge space and I can’t wait to see its use cases expand.

Brain-Computer Interfaces.

Augmented Reality. As someone in this thread mentioned for self-driving cars, I think the hype cycle for AR is in the right spot for us to begin seeing real advancements in the next couple years, especially with Apple’s recent announcement.

Absolutely. I enjoy Bubble – should have put it on the list.

Any other platforms that have caught your attention?

Unikernels – they seem promising, and yet the ecosystem is not there. I think it’s a matter of time.

+1 for Hasura. It’s such a pleasure to use. Having a configurable backend with such fine-grained authorization is just awesome. It literally cuts your project time in half.

Have you given it a try yet? I LOVE TypeScript and think the concept is really cool, but the compatibility story for NPM packages needs to be fleshed out somehow. Otherwise I fear it will suffer the same fate as the Python 2-to-3 transition.

1. Functional strictly typed programming patterns. It’s hard to say if functional languages themselves will get adoption, but we definitely see functional patterns being used more and more in languages like Javascript, and being pushed in things like React/Redux.

2. Graphql/service-based architectures

There are two subcommunities in the world of functional, strictly typed programming languages. The Haskell camp is where you get the mind-bending jargon and ivory-tower ideas. Luckily, there is another camp that eschews this head-in-the-clouds thinking and sticks to practical matters first and foremost. You’ll want to look at the ML family of languages: SML, Rust, OCaml, F#, and Scala (my favorite of them all, though for some reason some people are trying to turn it into Haskell on the JVM).
