

IDK, it wouldn’t be the first time a news org published some random shit as fact because they’re too eager to be the first to report on something.
Nah,
If I walk up to you on the street and tell you to hand over your money or I’ll kill you, that’s enough to land me in jail. It’s maybe even enough for you to be justified in punching me in self-defense, if you feared for your life and there was no other way you could ensure your safety.
But suddenly if I say I want to put a million people in a gas chamber, that’s A-OK? Suddenly no one can punch back or else they’re “just as bad”? Suddenly the lines are super blurry and the slopes are super slippery and it’s absolutely impossible to tell what a threat of violence is.
It’s a crime to say you’ll kill one person; it’s your right to say you’ll kill a million.
In theory having a database of configuration settings isn’t a horrible idea.
But the execution was terrible.
Honestly your situation is kind of a worst case scenario.
At this point Linux works really well if all you want to do is browse the web and play (single player) games.
It also works pretty well if you’re an expert who understands the system inside and out and can comfortably edit any config file on your drive to achieve what you want.
But if you’re a Windows power user who’s used to being able to set up all kinds of niche functionality, it’s a rough experience when all of your knowledge is suddenly useless and there’s a different set of things that are easy or hard to do.
It’s actually kind of a similar experience going the other way. For example, there are some things that Linux users are used to being able to script that can’t really be accomplished on Windows except via AutoHotkey, which from a Linux user’s perspective just seems incredibly dumb.
I haven’t had any problems on Linux Mint with a 3060 Ti aside from some artifacting when I try to do screen recordings (unless I disable flipping).
EDIT: I’ve had that GPU for about 2 years. I had a 1050 Ti for about 4 years before that.
Actually now that I think about it an update did break my graphics at one point, but that might’ve been partially my fault. I just reverted and reinstalled the same update right after though, and that worked just fine, so it wasn’t a huge deal.
Overall I would say it’s been more than 10 years since I’ve had an actual major graphics issue (the kind where I had to open xorg.conf).
So, keep in mind that single-photon sensors have been around for a while, in the form of avalanche photodiodes and photomultiplier tubes. And avalanche photodiodes are pretty commonly used in LiDAR systems already.
The ones talked about in the article I linked collect about 50 points per square meter at a horizontal resolution of about 23 cm. Obviously that’s way worse than what’s presented in the phys.org article, but that’s also measuring from 3km away while covering an area of 700 square km per hour (because these systems are used for wide area terrain scanning from airplanes). With the way LiDAR works the system in the phys.org article could be scanning with a very narrow beam to get way more datapoints per square meter.
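Just to put that coverage rate in perspective, here’s a quick back-of-envelope calculation (my own arithmetic, using only the two numbers from that article):

```python
# Back-of-envelope point rate for the airborne survey system mentioned above.
# The two inputs come from that article; the arithmetic is just illustrative.
points_per_m2 = 50            # stated point density
area_km2_per_hour = 700       # stated coverage rate

area_m2_per_hour = area_km2_per_hour * 1_000_000
points_per_hour = points_per_m2 * area_m2_per_hour
points_per_second = points_per_hour / 3600

print(f"{points_per_hour:.2e} points/hour")     # ~3.5e+10
print(f"{points_per_second:.2e} points/second")  # ~9.7e+06
```

So the airborne system is already pushing on the order of ten million returns per second; the difference is that it spreads them over a huge area instead of concentrating them in a narrow beam.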
Now, this doesn’t mean that the system is useless crap or whatever. It could be that the superconducting nanowire sensor they’re using lets them measure the arrival time much more precisely than normal LiDAR systems, which would give them much better depth resolution. Or it could be that the sensor has much less noise (false photon detections) than the commonly used avalanche diodes. I didn’t read the actual paper, and honestly I don’t know enough about LiDAR and photon detectors to really be able to compare those stats.
But I do know enough to say that the range and single-photon capability of this system aren’t really the special parts of it, if it’s special at all.
https://en.wikipedia.org/wiki/Tony's_Chocolonely
Dutch chocolate which is very good, and uses a slavery-free supply chain.
I think there’s a sort of perfect storm that can happen. Suppose there are two types of YouTube users (I think there are other types too, but for the sake of this discussion we’ll just consider these two groups):
Type A watches a lot of niche content of which there’s not a lot on YouTube. The channels they’re subscribed to might only upload once a month to once a year or less.
Type B tends to watch one kind of content, of which there are hundreds of hours from hundreds of different channels. And they tend to watch a lot of it.
If a person from group A happens to click on a video that people from group B tend to watch, that person’s homepage will then be flooded with more of that type of video, blocking out all of the stuff they’d normally be interested in.
IMO YouTube’s algorithm has vacillated wildly over the years in terms of quality. At one point in time if you were a type A user it didn’t know what to do with you at all, and your homepage would consist exclusively of live streams with 3 viewers and family guy funny moments compilation #39.
At the end of the day it comes down to this:
Is it cheaper to store steel stock in a warehouse, or terawatt-hours of electricity in a battery farm?
Is it cheaper to perform maintenance on 2 or 3x the number of smelters, or to maintain millions of battery or pumped hydro facilities?
I’m sure production companies would love it if governments or electrical companies bore the costs of evening out fluctuations in production, just like I’m sure farmers would love it if money got teleported into their bank account for free and they never had to worry about growing seasons. But I’m not sure that’s the best situation for society as a whole.
EDIT: I guess there’s a third factor which is transmission. We could build transmission cables between the northern and southern hemispheres. So, is it cheaper to build and maintain enormous HVDC (or even superconducting) cables than it is to do either of the two things above? And how do governments feel about being made so dependent on each other?
We can do a combination of all three of course, picking and choosing the optimal strategy for each situation, but like I said above I tend to think that one of those strategies will be disproportionately favorable over the others.
The commercial version of practically everything is better than the consumer version (or at least bullshit-free).
The reason being that a large company has negotiating power far beyond that of an individual consumer.
Specifically, they are completely incapable of unifying information into a self-consistent model.
To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow, and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.
Funnily enough, old-school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of robots like HAL 9000 and Data from Star Trek comes from.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.
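To make the “explicitly programmed” part concrete, here’s a toy sketch (my own illustration, not any actual 70s system) of the kind of forward chaining a logic engine does over hand-entered facts and rules:

```python
# Toy "logic engine": every fact and rule has to be typed in by hand,
# and the engine derives new facts by chaining rules until nothing changes.
facts = {
    ("is_human", "alice"),
    ("is_human", "bob"),
    ("touched_water", "towel"),
}

# Rules of the form: if (premise, X) is a fact, then (conclusion, X) is a fact.
rules = [
    ("touched_water", "is_wet"),   # "water makes things wet"
    ("is_human", "has_two_legs"),  # wrong for some people; a real system would
                                   # need every exception entered explicitly too
]

def forward_chain(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(known):
                if pred == premise and (conclusion, subject) not in known:
                    known.add((conclusion, subject))
                    changed = True
    return known

print(sorted(forward_chain(facts, rules)))
```

Everything it will ever conclude has to be spelled out in `facts` and `rules` like that, which is exactly why covering everyday common sense this way was basically impossible.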
With the rise of machine learning and large artificial neural networks we solved the problem of dealing with implicit, ambiguous, and paradoxical information but in the process completely removed the ability to logically reason.
That’s less the Unix way and more the BeOS way.
That sounds absolutely fine to me.
Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.
In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.
If you need high R/W performance and huge capacity at the same time (like for editing gigantic high resolution videos) you probably want some kind of RAID array.
That was a different technique, using simulated evolution in an FPGA.
An algorithm would create a series of random circuit designs, program the FPGA with them, then evaluate how well each one accomplished a task. It would then take the best design, create a series of random variations on it, and select the best one. Rinse and repeat until the circuit is really good at performing the task.
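The loop itself is dead simple. Here’s a rough sketch of that kind of evolutionary search (my own illustration: the FPGA programming and fitness measurement are stubbed out as placeholders since those happened in real hardware, and the constants are made up):

```python
import random

# Generic evolutionary-search loop in the shape described above.
# A "design" here is just a bitstring of configuration bits.
DESIGN_BITS = 1800      # arbitrary size for the sketch
POPULATION = 50
GENERATIONS = 500
MUTATION_RATE = 0.01

def random_design():
    return [random.randint(0, 1) for _ in range(DESIGN_BITS)]

def mutate(design):
    # Flip each bit with a small probability to create a random variation.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in design]

def measure_fitness(design):
    # Stand-in: in the real experiment this meant programming the FPGA with
    # the design and measuring how well the resulting circuit did the task.
    return sum(design)  # placeholder objective

population = [random_design() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    best = max(population, key=measure_fitness)
    # Next generation: keep the best design, fill the rest with mutants of it.
    population = [best] + [mutate(best) for _ in range(POPULATION - 1)]

print("best fitness:", measure_fitness(max(population, key=measure_fitness)))
```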
A dark pattern would be some sort of underhanded but legal tactic to trick or coerce a user into agreeing to something they wouldn’t otherwise.
But most websites aren’t using dark patterns for this; instead they just blatantly and plainly violate the law.
Doesn’t really count if you have to google it first to know what it is
Maybe you have to Google it
Besides the basics (operating systems, compilers, office, CAD, database software, etc.):
A copy of OpenStreetMap together with the linked Wikipedia articles, along with the software to view and edit them. I know you said no Wikipedia (since that’s pretty much a given), but this is basically the Hitchhiker’s Guide to the Galaxy.
A copy of Godot’s editor so people can still make games.
As many games as I could fit in the remaining space, concentrating on the ones that give you the most bang for your buck in terms of space.