Yes, but not normal walnuts, black walnuts. What most people think of as walnuts, at least where I’m from, come from the Persian/English walnut tree, Juglans regia.
Green almonds, right?
Pili nut?
I can only wish…
Honestly, I don’t think this could have gone any other way. As the article says, these are the so-called “Lane 1” missions which, while less important and lighter than “Lane 2” missions, are perfectly suited to the Falcon 9, or Falcon Heavy if needed. There’s no need to get a heavy rocket like Vulcan involved, especially since it’s not proven yet, so the only reason I could think of for someone else winning would essentially be as a subsidy to help SpaceX’s competition. Once Vulcan has a couple dozen successful missions under its belt I’ll expect it to be more competitive.
If we stop doing business with SpaceX, we immediately demolish most of our capability to reach space, including the ISS until Starliner quits failing. Perhaps instead of trying to treat this as a matter of the free market we should recognize it as what it is - a matter of supreme economic and military importance - and force the Nazi fucker out.
I’d be interested in setting up the highest quality models to run locally, and I don’t have the budget for a GPU with anywhere near enough VRAM, but my main server PC has a 7900x and I could afford to upgrade its RAM - is it possible, and if so how difficult, to get this stuff running on CPU? Inference speed isn’t a sticking point as long as it’s not unusably slow, but I do have access to an OpenAI subscription so there just wouldn’t be much point with lower quality models except as a toy.
Bevy, cause I’m a sucker for Rust
Kessler syndrome isn’t really that much of a risk specifically with Starlink (for now at least), as SpaceX seems to be doing things right despite Musk. The satellites are in such low orbits that even with a catastrophic loss of control, they’ll deorbit very quickly. The real risk comes as more companies and countries try to get a piece of the megaconstellation pie. Starlink seems to be fairly safe and sustainable on its own, but that may quickly change once coordinating collision avoidance maneuvers has to happen internationally.
Despite Musk’s well-earned reputation for being a shithead, SpaceX has thus far been doing the right thing far more often than most other space companies, and while it’s certainly possible that will change, the Starlink constellation would disappear entirely, and quickly, without constant replenishment, so it’s not as if we’d have no chance to act if they begin to show signs of concerning behavior. What’s far more worrying to me in terms of Kessler syndrome is the recent escalation around space warfare, as tensions between Russia, China, and the US continue to boil and nobody seems willing to really commit to making space a neutral zone. Even with space historically being an area of strong international cooperation despite politics (just look at the ISS), that unfortunately seems to be rapidly changing.
Well they said .NET Framework, and I also wouldn’t be surprised if they more or less wrapped that up - .NET Framework specifically means the old implementation of the CLR, and it’s been pretty much superseded by an implementation just called .NET, formerly known as .NET Core (definitely not confusing at all, thanks Microsoft). .NET Framework was only written for Windows, hence the need for Mono/Xamarin on other platforms. In contrast, .NET is cross-platform by default.
Holy shit. I knew they were going to simplify Raptor a lot, but even knowing most of the rat’s nest was sensors/etc., this is insane. I wish we could see cross sections!
Honestly, after DOS2, I’d play a Larian game in any setting just based on them being the devs - and that goes double after BG3. Their handle on storytelling and environments is so good I’d trust it would be enjoyable even in a setting I’m not interested in.
This is a use-after-free, which should be impossible in safe Rust due to the borrow checker. The only way for this to happen would be incorrect unsafe code (still possible, but a dramatically reduced code surface to worry about) or a compiler bug. To allocate heap space in safe Rust, you have to use types provided by the language like `Box`, `Rc`, `Vec`, etc. To free that space (in Rust terminology, dropping it by calling `drop()` or letting it go out of scope), you must be the owner of it and there may be no current borrows (i.e. no references may exist). Once the variable is dropped, it’s dead, so accessing it is a compiler error, and the compiler/std handles freeing the memory.

There’s some extra semantics to some of that, but that’s pretty much it. These kinds of memory bugs are basically Rust’s raison d’être - it’s been carefully designed to make most memory bugs impossible without using `unsafe`. If you’d like more information I’d be happy to provide!
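To make that concrete, here’s a tiny sketch of my own (not from the code in question) showing the borrow checker rejecting exactly this pattern - the heap allocation behind `v` can’t be freed while a reference into it is still alive:

```rust
fn main() {
    let v = vec![1, 2, 3]; // heap allocation, owned by `v`
    let first = &v[0];     // borrow of `v` (a reference into the heap data)
    drop(v);               // compile error: cannot move out of `v` because it is borrowed
    println!("{first}");   // the borrow is still live here, so the drop above is rejected
}
```

Delete the `println!` and it compiles fine, because the borrow ends before the drop.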
That’s the point. Malicious compliance.
IIRC it’s an APU thing, and last I heard it was just a rumor (could be out of date). Either way, non-LTSC Windows 10 is EOL in a year and a half. If you’re putting in a Zen 5 CPU, the best choice is realistically either Linux or Windows 11 Pro, since Pro can turn off all the bullshit through group policy. The one Windows machine I have to keep around is on 11 Pro and it’s basically Windows 10 with a slightly different taskbar. No Copilot bullshit, no ads, no Bing in Windows Search. If you’re OK with your taskbar on the bottom of the screen, IMO it’s the best choice as long as you have to use Windows.
I’m only an armchair physicist, but I believe this isn’t possible due to relativity. I know that, at the very least, there are cases where two observers can disagree on whether two events occurred simultaneously. Besides all the other relativity weirdness, that alone seems to preclude a truly universal time standard. I would love for someone smarter than me to explain more and/or correct me though!
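For anyone curious, the textbook version of that “disagreeing on simultaneity” point is just the Lorentz transformation (standard special relativity, nothing specific to this thread):

```latex
% Two events separated by \Delta x and \Delta t in frame S; frame S' moves at speed v.
\Delta t' = \gamma \left( \Delta t - \frac{v \, \Delta x}{c^{2}} \right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% If \Delta t = 0 (simultaneous in S) but \Delta x \neq 0, then \Delta t' \neq 0:
% the same two events are not simultaneous in S', so there is no frame-independent
% "now" to hang a universal time standard on.
```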
I was very intrigued by a follow-up to the recent Numberphile video about divergent series. It was a return to the idea that the sum of the positive integers can be assigned the value -1/12. There were some places in physics where this could be used, but as far as I know it was viewed as shaky math by a lot of experts.
As far as I recall, the story goes something like this: using a new technique Terence Tao found, a team was seemingly able to “fix” previous infinities in quantum field theory - there’s a certain way to make at least some divergent series work out to a finite value, and the presenter proposed that this can be explained as the universe “protecting us” from the infinities inherent in the math.
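For reference, my rough recollection of the smoothed-sum idea (so treat the details as approximate, not a statement of what the video actually derives): instead of cutting the series off sharply, you sum against a smooth cutoff, the divergence gets isolated in a single growing term, and the -1/12 appears as the leftover constant:

```latex
% Bare partial sums just diverge:
\sum_{n=1}^{N} n = \frac{N^{2}}{2} + \frac{N}{2}
% With a smooth cutoff \eta (\eta(0) = 1, compactly supported), the divergence is
% confined to the N^2 term and the constant term is -1/12:
\sum_{n=1}^{\infty} n \, \eta\!\left(\frac{n}{N}\right)
  = C_{\eta} N^{2} - \frac{1}{12} + O\!\left(\frac{1}{N}\right),
\qquad
C_{\eta} = \int_{0}^{\infty} x \, \eta(x) \, dx
% That -1/12 matches \zeta(-1), the value the zeta function's analytic continuation
% assigns to the "sum of all positive integers."
```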
It made me think about other places infinities show up in modern physics (namely, singularities in general relativity) and whether a technique like this could “solve” them without requiring a whole new framework like string theory.
Even as an (older) zoomer in the US, this was never a thing for me. No one cared what phone you used. If you had an Android you wouldn’t be in iMessage group chats but no one judged you for it.
It was honestly just dumb luck. I had heard of these previously from a friend who had some in the Philippines. I would say, really, I do know nothing about nuts, relatively speaking :)