

That’s not bad pricing-wise. There’s very, very little prosumer gear that’s multi-gigabit, and it’s all much higher priced, or it’s just a PC with several NICs.
If and when we move to hyperfibre this is going to be pretty high up on the list.
603 km/h for maglev, 574.8 km/h for steel rail; the latter set in France in 2007 by a hotted-up, modified TGV.
China holds the record for a stock train at 487 km/h, set in 2010.
(all per Wikipedia)
It looks like the article might be implying that they will be the fastest trains operating in revenue service when they enter service, but that surely needs to be demonstrated with a production train in revenue service.
There’s a good chunk of the world where you don’t ever have to water lawn, except when initially seeding it.
Regular trains don’t run underground. Lots of opencast mines exist.
Basically all mines have an above ground terminal where whatever you mined is unloaded from your underground trains, lifts, haul trucks or whatever else onto storage piles, then loaded onto the actual long distance trains.
If the mine entry is up a mountain, then the trip down from that point will be a net energy producer regardless of anything else.
I wouldn’t be surprised if there are electrified railway lines doing the same. Regenerate large amounts of energy into the grid while descending loaded; consume a relatively small amount of energy to haul the empty train back uphill.
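The energy balance above is easy to sanity-check with basic potential energy (all train masses, elevations, and efficiencies below are illustrative assumptions, not figures for any real line):

```python
# Rough energy balance for a loaded descent vs. an empty return climb.
# Assumed figures: 20,000 t loaded ore train, 2,000 t empty train,
# 600 m elevation change, 70% regen efficiency, 85% drive efficiency.

G = 9.81  # m/s^2

def descent_energy_mwh(mass_t, drop_m, regen_eff=0.7):
    """Recoverable energy (MWh) braking mass_t tonnes down drop_m metres."""
    return mass_t * 1000 * G * drop_m * regen_eff / 3.6e9

def climb_energy_mwh(mass_t, rise_m, drive_eff=0.85):
    """Energy (MWh) consumed hauling mass_t tonnes up rise_m metres."""
    return mass_t * 1000 * G * rise_m / drive_eff / 3.6e9

recovered = descent_energy_mwh(20_000, 600)
spent = climb_energy_mwh(2_000, 600)
print(f"recovered {recovered:.1f} MWh, spent {spent:.1f} MWh, "
      f"net {recovered - spent:+.1f} MWh per round trip")
```

With that 10:1 loaded/empty mass ratio, the descent recovers several times what the empty climb costs, which is why a downhill-loaded line can be a net producer.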
If you’re thinking of that CGI crane lifting concrete blocks, it’s unfortunately a really bad idea.
Pumped hydro stores energy by lifting weight uphill, instead. Water is basically the cheapest thing you can get per tonne, and is easy to contain and move.
To store useful amounts of energy using gravity, you need pretty large elevation differences and millions of tonnes of mass to move.
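To put numbers on that, E = mgh plus a round-trip efficiency term gives the required mass directly (the 1 GWh / 100 m / 80% figures below are just example assumptions):

```python
# How much mass a gravity store needs for a given energy and head.

G = 9.81  # m/s^2

def mass_required_tonnes(energy_mwh, head_m, efficiency=0.8):
    """Tonnes that must be lifted through head_m metres to store energy_mwh,
    accounting for round-trip efficiency."""
    joules = energy_mwh * 3.6e9 / efficiency
    return joules / (G * head_m) / 1000

# Assumed: a modest 1 GWh store with 100 m of head.
print(f"{mass_required_tonnes(1000, 100):,.0f} tonnes")
```

That works out to roughly 4.6 million tonnes of mass for a single gigawatt-hour, which is why water behind a dam wins over stacked blocks.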
The NASA Vehicle Assembly Building is also a contender.
I’m not sure how many dividing walls there are inside Everett, but the VAB is basically one massive empty skyscraper.
I feel dumber having read that.
Banning a whole country because you disliked a company?
Dealing with stuff that’s ‘almost working’ is often harder than starting from scratch; ask any tradesperson.
They also apparently cannot get their heads around the fact that people might give you a discount if you advertise their brand. Ad-supported pricing has been around for a long time; it’s not some voodoo.
Until the day comes that I get a letter in the mail from the government saying, “Here’s how much you paid in taxes, if you’re cool with that then please disregard”, I will not be satisfied.
NZ does that. More accurately, they email you to tell you that there’s a letter available online - I don’t think they send physical mail by default.
Then they pay any refund straight into your nominated bank account.
“Lossless” isn’t the term you want; that refers to compressing the main data without any loss. Lossless compression or storage of media is very rare outside of text and sometimes audio, because it ends up so large.
You want to preserve metadata. That applies regardless of how lossy the data compression is.
Any hard drive can fail at any time with or without warning. Worrying too much about individual drive families’ reliability isn’t worth it if you’re dealing with few drives. Worry instead about backups and recovery plans in case it does happen.
Bigger drives have significantly lower power usage per TB, and cost per TB is lowest around 12-16TB. Bigger drives also let you fit more storage in a given box. Drives 12TB and up are all currently helium filled, and run significantly cooler.
Two preferred options in the data hoarder communities are shucking (external drives are cheaper than internal, so remove the case) and buying refurb or grey market drives from vendors like Server Supply or Water Panther. In both cases, the savings are usually big enough that you can simply buy an extra drive to make up for any loss of warranty.
Under US$15/TB is typically a ‘good’ price.
For media serving and deep storage, HDDs are still fine and cheap. For general file storage, consider SSDs to improve IOPS.
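The $/TB comparison above is just one division; a quick sketch with made-up example prices (not quotes from any vendor):

```python
# Compare drive deals on cost per TB; the "$15/TB is good" threshold
# comes from the comment above, the prices are invented examples.

def price_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

deals = {
    "16TB refurb":  (180, 16),
    "8TB shucked":  (130, 8),
    "4TB retail":   (95, 4),
}

for name, (price, tb) in deals.items():
    ppt = price_per_tb(price, tb)
    verdict = "good" if ppt < 15 else "not great"
    print(f"{name}: ${ppt:.2f}/TB ({verdict})")
```

Note how the small retail drive loses badly on $/TB even before counting the extra bays, ports, and watts it consumes.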
I don’t remember if they fully closed the loopholes, but there are inputs that programs cannot catch unless you actually replace the OS.
Here in NZ they do a factory reset on your calculator at the start of every exam.
Even 95% is on the low side. Most residential-grade PV grid-tie inverters are listed as something like 97.5%. Higher voltage versions tend to do better.
Yeah, filters essentially store power during one part of the cycle and release it during another. Net power lost is fairly minimal, though not zero. DC needs filtering too: all those switchmode power supplies are very choppy.
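The store-and-release behaviour is easy to quantify for a DC-link capacitor: the energy shuttled per cycle is ½C(Vmax² − Vmin²), and only the small resistive (ESR) portion is actually lost. The capacitance, voltages, and ripple frequency below are illustrative assumptions:

```python
# Energy a filter capacitor absorbs and releases each ripple cycle.

def shuttled_energy_j(c_farads, v_max, v_min):
    """Energy (J) stored then released per cycle as the cap swings
    between v_min and v_max."""
    return 0.5 * c_farads * (v_max**2 - v_min**2)

# Assumed: 1 mF DC-link cap rippling between 390 V and 410 V at 100 Hz.
e = shuttled_energy_j(1e-3, 410, 390)
print(f"{e:.1f} J per cycle, ~{e * 100:.0f} W shuttled back and forth; "
      "net loss is only the ESR heating")
```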
B key vs M key. The laptop likely needs a SATA M.2 drive with B or B+M keying, while you have a PCIe x4 drive with M keying.
I’m not sure there are any DC power grids past the tens-of-megawatt range that aren’t just a 2/3/4-terminal HVDC link.
Railway DC supplies are usually just fat rectifiers and transformers fed from the AC mains, which provides the fault current/clearing and stability.
Ships are where I would expect to start seeing them arrive, or aircraft.
Almost all land-based standalone DC networks (again, not few-terminal HVDC links) are heavily battery backed and run at battery voltage - that’s not practical once you leave one property.
I’m sure there are some pretty detailed reports and simulations, though. A reduction in cost of multi-kV converters and DC circuit breakers is essential.
PV inverters often have around 1-2% losses. This is not very significant. You also need to convert the voltage anyway because PV output voltage varies with light level.
Buck/boost converters work by converting the DC current to (messy) AC, then back to DC. If you want an isolating converter (necessary for most applications for safety reasons) that converter needs to handle the full power. If it’s non isolating, then it’s proportional to the voltage step.
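As a simplified sketch of the non-isolating case: in a plain buck converter, only the fraction of throughput corresponding to the voltage step is really processed by the switching stage, so small steps are cheap and large steps approach the cost of a full converter (the voltages below are illustrative assumptions):

```python
# Fraction of throughput the switching stage of a non-isolating buck
# converter actually has to process, as a simplified first-order model.

def processed_fraction_buck(v_in, v_out):
    """Fraction of power 'converted' for a step from v_in down to v_out;
    the remainder effectively passes through at the input rail."""
    assert 0 < v_out <= v_in
    return (v_in - v_out) / v_in

# Small step: 400 V -> 380 V only really converts 5% of throughput.
print(processed_fraction_buck(400, 380))
# Big step: 400 V -> 48 V converts 88% of throughput.
print(processed_fraction_buck(400, 48))
```

An isolating converter, by contrast, pushes 100% of the power through the transformer regardless of how small the step is.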
Frequency provides a somewhat convenient method for all parties to know whether the grid is over- or under-supplied on a sub-second basis. Operating solely on voltage is more prone to oscillation and requires compensation for voltage drop, plus the information is typically lost at buck/boost sites. A DC grid would likely require much more robust and faster real-time comms.
The AC grid relies on significant (>10x overcurrent) short-term (<5s) overload capability. Inrush and motor starting requires small/short overloads (though still significant). Faults are detected and cleared primarily through the excess current drawn. Fuses/breakers in series will all see the same current from the same fault, but we want only the device closest to the fault to operate to minimise disruption. That’s achieved (called discrimination, coordination, or selectivity) by having each device take progressively more time to trip on a fault of a given size, and progressively higher fault current so that the devices upstream still rapidly detect a fault.
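The time-grading described above can be sketched with the standard IEC 60255 inverse-time curve, t = TMS × 0.14 / ((I/Is)^0.02 − 1). The pickup currents and time multiplier settings (TMS) below are invented example settings, chosen so the downstream device always wins the race for a shared fault:

```python
# Inverse-time overcurrent discrimination between two devices in series.

def iec_si_trip_time(i_fault, i_pickup, tms):
    """IEC 60255 'standard inverse' curve: t = TMS * 0.14 / ((I/Is)^0.02 - 1)."""
    ratio = i_fault / i_pickup
    assert ratio > 1, "device only operates above its pickup current"
    return tms * 0.14 / (ratio**0.02 - 1)

# Assumed settings: downstream picks up at 100 A with TMS 0.1,
# upstream at 400 A with TMS 0.3. Both see the same 2000 A fault.
fault = 2000  # A
t_down = iec_si_trip_time(fault, 100, tms=0.1)
t_up = iec_si_trip_time(fault, 400, tms=0.3)
print(f"downstream trips in {t_down:.2f}s, upstream in {t_up:.2f}s")
```

The downstream device clears the fault first, and the upstream device only operates if the downstream one fails, which is exactly the selectivity property described above.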
RCDs/GFCIs don’t coordinate well because there isn’t enough room between the smallest fault required to be detected and the maximum disconnection time to fit increasingly less sensitive devices.
Generators are perfectly able to provide this extra fault current through short term temperature rise and inertia. Inverters cannot provide 5-fold overcurrent without being significantly oversized. We even install synchronous condensers (a generator without any actual energy source) in areas far from actual generators to provide local inertia.
AC arcs inherently self-extinguish in most cases. DC arcs do not.
This means that breakers and expulsion type fuses have to be significantly, significantly larger and more expensive. It also means more protection is needed against arcs caused by poor connection, cable clashes, and insulation damage.
Solid state breakers alleviate this somewhat, but it’s going to take 20+ years to improve cost, size, and power loss to acceptable levels.
I expect that any ‘next generation’ system is likely to demand a step increase in safety, not merely matching the existing performance. I suspect that’s going to require a 100% coverage fibre comms network parallel to the power conductors, and in accessible areas possibly fully screened cable and isolated supply.
EVs and PV arrays get away with DC networks because they’re willing to shut down the whole system in the event of a fault. You don’t want a whole neighborhood to go dark because your neighbour’s cat gnawed on a laptop charger.
There should be no need for tuning, tweaking, or optimizing on functionality this basic.
If you ask the processor, it will spit out a topology graph telling you which threads/cores share resources, all the way up to (on large or server platforms) some RAM or PCIe slots being closer to certain groups of cores.
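On Linux, the kernel already exposes this topology under sysfs; a minimal sketch that maps each CPU to the CPUs sharing its physical core (the sysfs paths are the standard Linux ones, so this does nothing useful on other platforms):

```python
# Read SMT sibling info from Linux sysfs (/sys/devices/system/cpu/...).
import glob

def parse_cpulist(s):
    """Parse a sysfs CPU list like '0-3,8,10-11' into a sorted list of ints."""
    cpus = []
    for part in s.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

def sibling_map():
    """Map each CPU number to the CPUs it shares a physical core with."""
    result = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(pattern):
        cpu = int(path.split("/")[5][3:])  # 'cpu17' -> 17
        with open(path) as f:
            result[cpu] = parse_cpulist(f.read())
    return result

if __name__ == "__main__":
    for cpu, sibs in sorted(sibling_map().items()):
        print(f"cpu{cpu}: shares a core with {sibs}")
```

Tools like `lscpu -e` and hwloc's `lstopo` present the same kernel data in a friendlier form, including cache sharing and NUMA distances.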
Acorn/ARM apparently did much the same thing.
Are you talking about China or the US there?