

A well-known metallurgical behavior was found at the surface of the module, which will be fixed by the end of the third quarter of 2026.
Holy shit. This is “chatgpt rewrite this less negatively without mentioning rust” levels of corporate speech.
Former account: @Redjard@lemmy.dbzer0.com
Keyoxide: aspe:keyoxide.org:KI5WYVI3WGWSIGMOKOOOGF4JAE (think PGP key)


Headline sounds like they want them to relocate to Europe. Bummer.


The reason is that before all this, only 20% of oil passed through the strait, and 8% is being redirected through pipelines built long ago in anticipation of this.
A 12% supply decrease will double prices, because it turns out you don’t have much choice when you design everything to depend on oil, but eventually everyone figures out how to use 12% less.


Females
yucky.
Also probably a function of the intended audience. Only happens when content farming stuff that hits a certain audience, and then it’s equally done by all creators. If you’re not an attractive woman you just put someone else in the thumbnail, like someone you interviewed, or if it’s game-related a character.


Don’t do it often enough to remember which is better and which is worse. The first search result isn’t garbage enough to bother with something else.


In those cases it’s less painful to use a website to extract the transcript and read that.
You can skim around text way easier than a video.
TLDR: DDR RAM refreshes itself, which sometimes makes CPUs stall when reading RAM. High-frequency traders don’t want that, so they figure out ways to keep data live as two copies on two different portions of RAM that freeze at different times. This is impractical for normal programs. Most of the effort is spent working around multiple abstraction layers, where the OS and then the RAM itself change where specifically data goes.
Every 3.9 microseconds, your RAM goes blind. Your RAM physically has to shut down to recharge.
This lockout is defined by the JEDEC spec as tRFC, or refresh cycle time. Now, a regular read on DDR5 might take you like 80 nanoseconds. But if you happen to accidentally get caught by this lockout, that’s going to bump you up to about 400 nanoseconds.
Think for a second. What industry might really care about wasting a couple hundred nanoseconds, where one incorrectly timed stall would cost you millions of dollars? That’s right, the world of high-frequency trading.
[custom benchmark program on DDR4 RAM and a 2.65GHz CPU:] When you plot the gaps between the slow reads, they’re all the same, 7.82 microseconds [20,720 cycles] apart every single time. […] So, the question is, if this is so periodic, can we potentially predict when the refresh cycle is going to happen and then try to read around it?
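The gap analysis in that quote can be sketched quickly. This is not the video's benchmark: the timestamps below are simulated with a made-up period near the 7.82 µs figure, and a real run would gather them with a tight cache-flushing read loop.

```python
# Sketch of the "plot the gaps between the slow reads" analysis.
# Timestamps are SIMULATED here (the period is an assumption taken
# from the quoted 7.82 us figure); a real benchmark would record the
# arrival times of refresh-stalled reads.
PERIOD_NS = 7820  # ~7.82 us between refresh-stalled reads

# Simulated arrival times (in ns) of the "slow" reads:
slow_reads = [i * PERIOD_NS for i in range(1000)]

# The analysis itself: differences between consecutive slow reads.
gaps = [b - a for a, b in zip(slow_reads, slow_reads[1:])]

# If refresh is strictly periodic, every gap is the same value,
# which is what makes predicting (and reading around) it plausible.
assert min(gaps) == max(gaps) == PERIOD_NS
print(f"refresh period ~ {gaps[0] / 1000:.2f} us")  # prints 7.82 us
```

On real hardware the gaps would jitter and the interesting part is how tight that distribution is, but the consecutive-difference trick is the same.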
See, it’s not like the whole stick of RAM gets locked when the refresh cycle happens. It’s a lot more granular than that. With DDR4, for example, the refresh happens at the rank level. And then DDR5 gets even more complicated where you can like subsection down even further than that.
The memory controller does what’s called opportunistic refresh scheduling, which basically means that it can postpone up to eight refreshes and then catch up later if we happen to be in a busy period. […] how the heck are you going to predict opportunistic refresh scheduling?
Then stuff about virtual memory management in modern OSs
And I take two copies of my data and I space them nicely 128 bytes apart. And I’m feeling pretty good about myself, but for all I know, it could be straddling a page boundary and then the OS could have decided to put them wherever it felt like putting them.
physical ram address issues:
So the XOR hashing phase kind of acts like a load balancer baked directly into the silicon itself. It takes in your physical address, does a little bit of scrambling, and tries to spread it out evenly across all of the banks and channels.
This also helps against rowhammer attacks, where repeatedly accessing rows physically adjacent to an address lets you flip bits at that address.
So, DRAM [XOR] hashing strategies were already not documented publicly. But then after the entire rowhammer thing, obviously, there was even less incentive to publish these load balancing math strategies publicly.
If AMD and Intel documented this kind of stuff, they’d kind of be like locking themselves into a strategy because customers would start to build against it. And then next year when it comes around, it’s really going to make your life difficult because you’re not going to be able to change things nearly as easily. But if you just don’t document it, well, who’s going to complain? only weirdos doing crazy stuff like me.
Inside of your CPU, right next to the memory controllers, there are actually tiny little hardware counters, one for every channel. […] If we do a simple sudo modprobe amd_uncore, it reveals those hardware counters to the standard Linux perf tool. […] If I write a tight loop of code that constantly flushes the cache and hammers one particular memory address, then one counter should start to light up. And theoretically, this should tell us exactly what channel our data is living on.
Can’t really tell what’s going on here. Well, that, my friend, is OS noise. […] The problem is these counters are pretty dumb, so you can’t tell them to only count the reads from this particular process. […] All we need to do is run it 50,000 times. […] See that spike? Super cool. And now I really know where my data lives.
So, to me, I don’t really care which channel I’m ending up on, whether that’s channel 3, channel 7, whatever, doesn’t matter to me. All I need to do is make sure I’m ending up on different channels. […] The mathematical answer is that XOR is linear over GF(2), which is actually really simple. Basically that means that no matter what scrambling the memory controller does, flipping a base bit will always flip the output, no matter how many things are chained together.
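That linearity argument is easy to demonstrate. The mask below is completely made up (real controllers keep theirs undocumented, as noted above), but the point is that the argument holds for any XOR mask.

```python
# XOR-hash linearity over GF(2), with a HYPOTHETICAL bank mask.
# Real memory controllers use undocumented masks; linearity holds
# regardless of which mask is in use.
import random

def parity(x: int) -> int:
    # Parity of the set bits: the GF(2) sum.
    return bin(x).count("1") & 1

def bank_bit(addr: int, mask: int) -> int:
    # One output bit of an XOR hash: parity of the masked address bits.
    return parity(addr & mask)

MASK = 0b1010_0100_0000  # made-up mask (bits 6, 9, 11), not a real controller's
flip = 1 << 9            # flip address bit 9...
assert MASK & flip       # ...which participates in this hash

for _ in range(1000):
    a = random.getrandbits(32)
    # Linearity: h(a ^ d) == h(a) ^ h(d). Since h(flip) == 1 here,
    # flipping bit 9 ALWAYS toggles the bank bit, no matter what the
    # rest of the address is or how many XOR stages are chained.
    assert bank_bit(a ^ flip, MASK) == bank_bit(a, MASK) ^ bank_bit(flip, MASK)
    assert bank_bit(a ^ flip, MASK) != bank_bit(a, MASK)
print("flipping bit 9 always lands on the other bank")
```

So even without knowing the mask, you can find one address bit whose flip provably moves your second copy to a different bank or channel, which is all the two-copies trick needs.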
Goes on to write low latency benchmarks which show lower latency.
uBlock Origin has a button to disable JavaScript. For news pages, that tends to be the easiest way to make them barely usable.


Hope no one does that to the bank notes.


What I see now is pages lossily compressing pngs “because webp can do lossless” instead of just handing me the png file they still have.
And there are still tons of issues with webapps out of my control not supporting it.


We have no magnetic monopoles, so at best this is a dipole field with inverse-cube falloff. Given they must be focusing the field as much as possible, I’d expect it to drop off much faster than that.
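The inverse-cube claim can be checked with the standard on-axis field of an ideal point dipole; the dipole moment below is an arbitrary placeholder, since only the ratio matters.

```python
# Inverse-cube falloff of an ideal magnetic dipole: the on-axis field
# is B(r) = (mu0 / 4pi) * 2m / r^3, so doubling the distance cuts the
# field by a factor of 8. The moment m is an arbitrary placeholder.
MU0_OVER_4PI = 1e-7  # T*m/A
m = 1.0              # arbitrary dipole moment, A*m^2

def b_axial(r: float) -> float:
    return MU0_OVER_4PI * 2 * m / r**3

ratio = b_axial(0.1) / b_axial(0.2)
assert abs(ratio - 8) < 1e-9
print(f"B(r)/B(2r) = {ratio:.1f}")  # prints 8.0
```

Any real, focused field geometry only decays faster than this idealized best case, which is the point of the comment.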


Uh, you can’t just use a profile that doesn’t exist


I can though. If all the profiles are garbage it’s beyond saving anyway; a single outlier can be ignored.


The monitor sends you a list of accepted input formats. You can sanity-check that list for outliers, without online information and without hardcoding limits.


I’d expect any current DisplayPort port to handle very high refresh rates when the resolution is reduced correspondingly. The limit to my knowledge is in bitrate.
I’d also expect connector support to sit in the GPU driver.
A basic sanity check might be the answer though. Still, why not improve it instead of just increasing the number? You could check whether the rate is an outlier, or whether many of the offered profiles climb up toward that rate, for example.
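A minimal sketch of that outlier check, assuming a made-up mode list and threshold; a real driver would look at the EDID/DisplayID timing descriptors instead of a bare list of numbers.

```python
# Sketch of the sanity check suggested above: flag a refresh rate as
# suspect only when it is wildly out of line with the other modes the
# display advertises. Mode list and threshold factor are made-up examples.

def suspect_rates(rates_hz: list[float], factor: float = 4.0) -> list[float]:
    # Compare against the median of advertised modes rather than a
    # hardcoded ceiling, so a genuine 1000 Hz panel (whose mode list
    # climbs up toward that rate) would not be rejected.
    median = sorted(rates_hz)[len(rates_hz) // 2]
    return [r for r in rates_hz if r > factor * median]

# A plausible mode list plus one bogus entry:
modes = [60.0, 59.94, 75.0, 120.0, 144.0, 10000.0]
print(suspect_rates(modes))  # prints [10000.0]; 144 Hz is not flagged
```

The design point is that the limit adapts to each monitor's own mode list, so nothing needs updating when faster panels ship.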


Not sure how far you wanna go. I know my way around mediawiki from the sysadmin side (installing, updating, installing extensions and themes, configuring weird features like ldap auth, …), the admin side (css, users and groups, templates and lua scripts), and some moderation (editing etiquette on wikipedia and a few other wikis, typical style guides, organization of pages and overview pages).
I’m quite busy lately, but you could ask me some questions via dm for example and I would be willing to do some small things.


If you measure response curves of individual cones and rods you won’t see any of the parameters go below the ms range, probably not even below 10ms. However the retina does receive bright short pulses as longer averaged signals. All the very high Hz vision cases see information of the same “object” spread over many cells in the retina. A trail showing up as many distinct images vs a long smear.
If you couldn’t move your eyes the limit would be lower, but because you can, the rendering cannot anticipate those effects and emulate them. Motion blur is what happens when you always “anticipate” the eye to remain static. If you could measure eye movement extremely well and react within well under a ms, you might be able to match motion blur to the eye movement of a single person. Add a second observer and it already breaks down. Not that our sensors are anywhere remotely near making this possible.
Edit: I suppose this would mean if you integrated a display into contact lenses and got the latency right you would max out at lower Hz.


Shouldn’t be enums, as refresh rates can be floating-point, and in practice there is also a lot of weirdness out there, like 59.94Hz.
The timing really needs to be matched to the monitor, you don’t want a 60Hz monitor using the resources of a 1000Hz monitor at any point. It should also be handled by the gpu and gpu driver more than the os.
I don’t think it’s that easy and I struggle to think of a legitimate reason. To me it seems more like an arbitrary bounds-check on monitor info received via hdmi/displayport. Bad coding for sure, but also potentially a point where people are pushed to newer more problematic versions of windows as the older ones “don’t support new hardware”.
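The 59.94 Hz oddity mentioned above is the NTSC 1000/1001 rate: a nominal 60 Hz scaled down by 1000/1001 for historical color-subcarrier reasons, which is exactly the kind of value an enum of integer rates cannot represent.

```python
# 59.94 Hz is not an arbitrary number: it is 60 Hz * 1000/1001,
# i.e. an exact rational rate of 60000/1001 inherited from NTSC.
from fractions import Fraction

ntsc_field_rate = Fraction(60000, 1001)
assert abs(float(ntsc_field_rate) - 59.94) < 0.001
print(float(ntsc_field_rate))  # prints 59.94005994005994
```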


Why was this ever a hardcoded limitation?
Most changes are updating the copyright year.
After that, it’s pretty much (or maybe completely, I haven’t checked exhaustively) for the --help and --version flags, not for the core part of exiting with a certain exit code.