





well, see: when you’re too lazy to design a schema and just want to throw broken data into a black hole where you may or may not be able to retrieve it, then deal with the repercussions in production - or better yet, let the ops team handle it at 3am - that’s when you’d choose mongodb


saying Microsoft requires that you go out and obtain a signed certificate that proves your identity as a developer
clearly that’s not the case if this was exploitable… again, N++ has an auto-update mechanism that they currently use. if they used a microsoft signing key to sign a build’s hash, this hijack would not be possible
thus they have an update mechanism that works around microsoft signing… how it does so is irrelevant. that is the current state of the software
The update mechanism was successfully hijacked because integrity checks and authentication checks were not properly in place
that part we definitely agree on
Notepad++ even said that they moved hosting providers after this happened to them
side note: that doesn’t remotely solve the problem… software updates should be immune to this to start with. it’s a problem that the hosting provider was compromised, but honestly we’re talking about a state-sponsored hack targeting other states: almost no hosting provider would include this in their risk assessment, let alone shared hosting providers
Can you point out an existing open source application that runs on Windows that only uses GPG signatures?
again, that’s irrelevant… the concept we’re talking about isn’t even specific to GPG. signing a hash using a private key is basic crypto, and GPG is just one out-of-the-box implementation
if we remove microsoft signing as an option for whatever reason (which we have) then it’s still very possible, and very easy to implement signed updates into your own custom update mechanism
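to make that concrete, here’s a minimal sketch of the “sign a hash with a private key” idea using node’s built-in crypto (filenames and key handling are made up for illustration - this is obviously not NPP’s actual updater):

```ts
import { generateKeyPairSync, sign, verify } from "node:crypto";
import { readFileSync } from "node:fs";

// one-time, offline: the developer generates a keypair and ships the
// public key embedded in the application itself
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// release time: sign the update payload with the private key
const update = readFileSync("npp-update.exe"); // hypothetical filename
const signature = sign(null, update, privateKey); // ed25519 signs the raw bytes

// client side, inside the existing auto-updater, BEFORE executing anything:
if (!verify(null, update, publicKey, signature)) {
  throw new Error("update signature invalid - refusing to install");
}
```

a compromised host can still serve a tampered binary, but the client now refuses to run it because the attacker doesn’t hold the private key… no microsoft, no money, no 3rd party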


yes but as you yourself said
I think they want to, but Microsoft has made it expensive for open source developers who do this as a hobby and not as a job to sign their software. I know not too long ago, this particular dev was asking its users to install a root certificate on their PC so that they wouldn’t have to deal with Microsoft’s method of signing software, but that kind of backfired on them.
the part that we’re arguing against isn’t that a microsoft signing key would have fixed the problem, it’s
No, because you wouldn’t be able to execute the updated exe without a valid signature. You would essentially brick the install with that method, and probably upset Microsoft’s security software in the process.
this update mechanism already exists: it’s the reason the hijack was possible. whatever the technical process behind the scenes is, it’s irrelevant… that is how it currently works; it’s not a “what if”
adding signing into that existing process without any 3rd party involvement is both free, and very very easy
which is why this is a solved (for free) problem on linux


Windows and MacOS do not use that method to verify the authenticity of developer’s certificates.
completely irrelevant… software authenticity doesn’t have to be provided by your OS… this is an update mechanism that’s built into the software itself. a GPG signature like this would have prevented the hack
The update mechanism works fine, but you will not be able to execute the binary on a Windows or MacOS system
that’s what we’re saying: this update mechanism already exists, and seems to install unsigned software. that’s the entire point of this hack… how it technically works is irrelevant


there are more ways to do signing than paying microsoft boatloads of money… just check a gpg sig file ffs (probably using detached signatures: again, it’s already built into existing tools and it’s a well-known, easily solved problem)
what’s irrelevant is the argument about how the auto update mechanism would work because it already exists
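for illustration, checking a detached sig is basically a one-liner shelling out to gpg (filenames are hypothetical; this assumes the dev’s public key is already in the local keyring):

```ts
import { execFileSync } from "node:child_process";

// gpg exits non-zero on a bad or missing signature, so execFileSync throws
function updateSignatureIsValid(): boolean {
  try {
    execFileSync("gpg", ["--verify", "update.exe.sig", "update.exe"], {
      stdio: "pipe",
    });
    return true;
  } catch {
    return false;
  }
}
```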


that’s all completely irrelevant… there is already an update mechanism built into NPP: that’s the entire point of the attack… it’s this update mechanism that got hijacked


You can do all that with a CSS variable though…
and then people have to learn what it all means, where those variables are, how your mess of custom CSS hangs together, and probably what overrides what in your hierarchy
you end up with this soup of classes on every single element
it’s either that or a soup of stuff in CSS. the difference is largely academic in modern HTML because it’s all contained in components anyway
they have to be as short as possible, and so they can’t use
font-size and font-weight.
they don’t have to be; they could easily use font-size and font-weight, but i much much prefer the -lg notation… it makes your flow so much quicker. it reduces cognitive load significantly
I still suspect you’re better off just using the effort you would need to learn the tailwind classes to instead learn plain flexbox.
i know flexbox and grid plenty well, and similar applies across the board for things like tailwind: containing everything together so that you don’t have to mess around switching between different places to define things, and using classes that kinda just represent what you want in shorthand literally makes my frontend development 10x quicker, and just feel smoother… even when i’m just doing personal projects
you don’t have to believe me; that’s fine… but i used to think similarly to you, had a couple of failed attempts and hated tailwind, and my most recent personal projects it just clicked and everything feels so nice. i’m a principal engineer, and have done plenty of work on all kinds of projects so it’s not like i’m inexperienced and just go with the latest fad. these small time savings really add up
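for anyone wondering what the difference actually looks like, a toy example (the tailwind classes are real utilities; the CSS-variable names are the kind you’d otherwise invent per project):

```tsx
import type { ReactNode } from "react";

// tailwind: the styling is right there on the element, in shorthand
export function CardTitle({ children }: { children: ReactNode }) {
  return <h2 className="text-lg font-semibold tracking-tight">{children}</h2>;
}

// vs the CSS-variable route: works fine, but now the reader has to go find
// where --heading-size is defined and what .card-title overrides:
//   .card-title { font-size: var(--heading-size); font-weight: 600; }
```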


i’m not a frontend engineer so i don’t know the difference between text- and font- without looking, but that’s another good example of why frameworks are great: 6px is an explicit size, whereas md, 2xl, etc are all relative… per project you can decide what those sizes are and everything just falls into place… you rarely care what the size is in pixels; mostly you only care about sizes relative to other parts of the UI… so again, people joining a project don’t need to memorise magic numbers, because they know what the size suffixes mean without needing to guess
i’ve only recently started to use tailwind (originally i saw no point, pretty much for the reasons you’re stating: why use classes like that when you can just use styles on the element and we know that’s bad) but since i embraced it i’ve started writing quality components much much faster… especially for layout like flexbox and grid it just flows really nicely, and i really don’t find that it feels like i’m repeating myself at all (partly because “repeating yourself” should be avoided by simply using components these days: CSS is an over-complicated and ill-fitting solution to the problem of styling in modern UIs)
(okay i looked up text- and font-: text is size, font is weight… which tracks with my understanding of the other parts of tailwind and the way type is handled in software generally… i think there are no good options here)
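the “per project you decide what the sizes are” bit really is just config - a minimal sketch (the keys are tailwind’s real scale names; the values are whatever fits your project):

```ts
// tailwind.config.ts
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    fontSize: {
      sm: "0.8rem",
      base: "1rem",
      lg: "1.25rem",
      "2xl": "2rem",
    },
  },
} satisfies Config;
```

every text-sm / text-lg / text-2xl in the codebase picks up the new values; nobody memorises pixel numbers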


the same could be said for any language that isn’t raw binary: what does it save you? you still have to write stuff to get the program you want, and you still have to come up with the business rules
almost all software engineering tools just save you keystrokes, or save you from needing the knowledge to implement repeatable things… or give you a standardised way of doing things so new people can approach your project without having to learn as many details (eg rails, django, nextjs, etc: the terminology and layout of such projects are familiar; daos/views/etc all behave the same)
for css frameworks for example, perhaps you have a .rounded-corners class… sure, you could just implement it yourself, but if you’re using a framework you save a few minutes, the outcome is likely the same, you don’t need to know about the border-radius details (and css frameworks likely implement things like shims and accessibility correctly, freeing you from needing deep knowledge of some esoteric details), and if the framework is big (like tailwind etc) then when you employ someone new, they know exactly what .rounded-corners means
… obviously .rounded-corners is a pretty simple example, but you can imagine that when these libraries fill out with many many tools, the shorthands get much more complex


LLMs don’t have to be random AFAIK: if you turn down the temperature parameter and send the same seed every time, you get the same result
https://dylancastillo.co/posts/seed-temperature-llms.html
for most people this isn’t exactly what you want, because “temperature” is sometimes shorthanded as “creativity”: it controls how out of left field the result can be
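quick sketch of what that looks like against an openai-style API (model name and prompt are placeholders; note that openai themselves only promise *mostly* deterministic sampling with a fixed seed, which is why responses include a system_fingerprint):

```ts
// same seed + temperature 0 + identical params → (mostly) the same output
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "tell me a joke" }],
    temperature: 0,
    seed: 42,
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```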


i thought this too, and i just started actually working with it and DAMN is it fast… i agree that it’s kinda a technical “what the fuck are you doing?!?” but… yeah… i can’t even really explain why


yeah i remember that as well! considering the bandwidth netflix takes up i’m not surprised at all! i think it’s like 15% of global internet bandwidth or something crazy?


I’m guessing you dropped a zero or two on the user count
i was being pretty pessimistic because tbh i’m not entirely sure of the requirements of streaming video… i guess yeah 200-500 is pretty realistic for netflix since all their content is pre-transcoded… i kinda had in my head live transcoding here, but also i said somewhere else that netflix pre-transcodes, so yeah… just brain things :p
also added an extra zero to the wattage
absolutely right again! i had in my head the TDP of eg a threadripper at ~1500w - it’s actually 350w or lower


my numbers are coming from the fact that anyone who’s replacing all their streaming likely isn’t using a single disk… WD red drives (as in NAS drives) according to their datasheet use between 6 and 6.9w when in use (3.6-3.9w at idle)… a standard home NAS has 4-6 bays, and i’m also assuming that in a typical NAS setup they’re in some kind of RAID configuration, which likely means some level of striping so all disks are utilised at once… again, i think all of these are decent assumptions for home users using off the shelf hardware
i’m ignoring sleep here, because sleep for NAS drives leads to premature failure… this is why, if you buy WD green drives for your NAS for example and you use linux, you use hdparm to turn off sleep, to avoid constantly parking and unparking the heads, which significantly reduces drive life (afaik many NAS products do this automatically, or otherwise manage it)
the top end of that estimate for drives (6 drives) is 41.4w, and the low end (4 drives) is 24w… granted, not everyone will have even those 4 drives, so perhaps my estimate is a little off, but i don’t think 30w for drives is an unreasonable assumption
again, here’s where data centres just do better: their utilisation is spread much more evenly… the idle power of drives is not hugely less than their full-speed read/write power, so it’s better to have constant access over fewer drives, which is exactly what happens in DCs because they have fewer traffic spikes (and can legitimately power drives off for hours at a time because their load is both predictable and smoother, simply due to their scale)
also, as someone else in the thread mentioned: my numbers for servers were WAY off for a couple of reasons, but basically
Back of the envelope math says that’s around 0.075 watts per individual stream for a 150w 2U server serving 2000 clients, which looks pretty realistic to my eyes as a Sysadmin.
that also sounds realistic to me, having realised i fucked up my server numbers by an order of magnitude for BOTH power use, and users served
servers and data centres are just in a class of their own in terms of energy efficiency
here for example: https://www.supermicro.com/en/products/system/storage/4u/ssg-542b-e1cr90
this is an off-the-shelf server with 90 bays that has a 2600w power supply (which even then is way overkill: that’s ~29w per drive)… with 22tb drives (off the top of my head, because that’s what i use, as it is/was the best $/byte) that’s almost 2pb of storage… that’s gonna cover a LOT of people with that 2600w, and imo 2600w is far beyond what they’re actually going to be pulling
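spelling out the envelope math from above:

```ts
// WD red datasheet: 6-6.9w per drive in use
console.log(4 * 6.0);    // 24   w - 4-bay NAS, low end
console.log(6 * 6.9);    // 41.4 w - 6-bay NAS, high end

console.log(150 / 2000); // 0.075 w per stream on the 2U server quoted above
console.log(2600 / 90);  // ~28.9 w of PSU budget per bay (way overkill)
console.log(90 * 22);    // 1980 tb raw with 22tb drives ≈ 2pb
```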


an n150 mini pc - largely considered a very efficient package for home servers - consumes ~15w max without the gpu, and ~9w idle
a raspberry pi consumes 3-4w idle
none of that is supporting more than a couple of people streaming 4k like we’re talking about in the case of netflix
and a single hard drive isn’t even close to what we’re talking about… you’re looking at ~30w at least for the disks alone
as for networking cost, it’s likely tiny… my 24-port gigabit switch from 15 years ago sips < 6w… i can only imagine that’s pretty inefficient compared to today’s standards (and 24 ports is pretty tiny for a DC, and port power consumption doesn’t scale linearly)
data centres are just straight up way more efficient per unit of processing than your home anything; it pretty much doesn’t matter how efficient your home gear is, or what the workload is unless you switch it off most of the time - which doesn’t happen in a DC


self hosting is wildly less efficient… one of the biggest costs in data centres is electricity, and one of the biggest constraints is electrical infrastructure… you have pretty intense power budgets in data centres and DC equipment is pretty well optimised to be efficient
meanwhile a home server doesn’t likely use server hardware (server hardware is far more efficient), is probably about 5-10y or more out of date, and isn’t likely particularly dense: a single 1500w server can probably service ~20 people in a DC… meanwhile an 800w home server could probably handle ~5 people
add to that the fact that netflix pre-transcodes their vids in many different qualities and formats, whilst home streaming - unless streaming original quality - mostly re-transcodes, which is a very energy-hungry process
heck even just the hard drives: if everyone ran their own servers and stored their content that’s thousands if not hundreds of thousands more copies of the data, and all that data is probably on spinning disks


i’d also say manufacturing the devices probably roughly doubles the carbon footprint (same with the car but we’re trying every trick in the book to figure out where the figure came from)


this law covers the fediverse. aussie.zone now has a verification process
i agree with the above commenter: something should be done, but this is the wrong way to do it… it creates problems and effectively solves none
real vibes of
The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.
- Malcolm (cunt) Turnbull - a conservative ex-PM


the vuln afaik is remote code execution via basically a mechanism that’s kinda like a transparent RPC to the server (think: you just write frontend code with a “getUsers” call and it automatically retrieves and deserializes the results so you can render the UI without worrying about how that data got into the browser)
i’m not a front end engineer, and haven’t used react server components, but i am a principal software engineer, i do react for personal projects, and have written react professionally
i can’t think of a way it’d be exploitable via purely client-side means
i THINK what they mean is that you can use some of the RSC stuff without the RPC-style interfaces, and in that case they say the server component is still vulnerable, but you still need react things running on your server
a huge majority of react code is client-side only, with server-side code written in other languages/frameworks and interfaced via something like REST or GraphQL (or even RPC of course)
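for anyone who hasn’t seen RSC: the “transparent RPC” shape looks roughly like this (getUsers and ./db are made-up names; the “use server” directive is the real mechanism that turns an exported async function into an endpoint the client calls as if it were local):

```tsx
"use server";

import { db } from "./db"; // hypothetical data layer

export async function getUsers() {
  // runs only on the server; the framework serializes the return value and
  // ships it to the client, which deserializes it automatically - that
  // (de)serialization boundary is where this class of vuln lives
  return db.user.findMany();
}
```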