• 3 Posts
  • 1.34K Comments
Joined 3 years ago
Cake day: June 5th, 2023

  • I think it's extremely unlikely that they have any awareness, but I still feel like this kind of thing is unnerving, and could potentially lead to issues someday even so.

    Whatever awareness/consciousness/etc. is, it's at least clearly something our brain (and to a lesser extent some other parts of the body) does, given how changes to that part of the body impact that sense of awareness. As the brain is an object of finite scope and complexity, I feel very confident in saying that it is physically possible to construct something that has those properties. If it weren't, we shouldn't be able to exist ourselves.

    To my understanding, neural networks take at least some inspiration from how brains work, hence the name. They're not actual models of brains, I'm aware, and in any case, I suspect based on how AIs currently behave that whatever it is the brain does to produce its intelligence and self-awareness, the mechanism that artificial neural networks mimic is only an incomplete part of the picture. However, we are actively trying to improve the abilities of AI tech, and it feels pretty obvious that the natural intelligence we have is one of the best sources of inspiration for how to do that.

    Given that we have lots of motivation to study the workings of the brain, lots of people motivated to improve AI tech (which will continue, even if more slowly, whenever the economic bubble pops, since such things don't usually result in a technology disappearing entirely), and that something about the workings of the brain produces self-awareness and intelligence, it seems pretty likely to me that we'll make self-aware machines someday. It could be a long way off, I've no idea when, but it's not like it's physically impossible, infinitely complicated (random changes over a finite time of natural selection managed it, after all, so there's a limit to how complex it can be), or something we lack an example to study. And given that the same organ produces both awareness and intelligence, we can't assume we will do this entirely intentionally either; we might just stumble into it by mimicking aspects of brain function in an attempt to make a machine more intelligent.

    Now, if/when we do someday make a self-aware machine, there are some obvious ethical issues with that, and it seems to me that the most obvious answer, for a business looking to make a profit with them, will be to claim that what you have made isn't self-aware, so that those ethical objections don't get raised. And it will be much easier for them to do that if society as a whole has long since gotten used to the notion of machines that just parrot things like "I'm depressed" with no real meaning behind it, especially when they do so in a way that could fool an average person, because we just decided at some point that this was an annoying but ultimately not that concerning side effect of some machine's operation.

    Maybe I'm just overthinking this, but it really does give me the feeling of "thing that could be the first step to a disaster later if ignored". I don't mean a classic sci-fi "Skynet" style of AI disaster, just that we might someday do something horrible, and not even realize it, because there will be nothing such a future machine could say to convince people of what it was that the current dumb parrots, or a more advanced version built in the meantime, couldn't potentially say as well. And while that's a very specific and probably far-off risk, I don't see any actual benefit to a machine sometimes appearing to complain about its treatment, so even the most remote of downsides has nothing to outweigh it.




  • It's difficult to say that that has been the key, in my view, because the primary mechanism by which this has happened has been the spread of industrial infrastructure (and thus both automation and the capacity to trade more things with other places) into areas where it was previously lacking, which tends to reduce the amount of labor needed to produce many common goods, and thus their relative price. Making more things, and more cheaply, is likely to reduce poverty under just about any economic system, and there's nothing about industrial development that implies it must be done under capitalism, so I don't think we can say capitalism was the key so much as one of the options, which most places happened to go with.

    That being said, say for the sake of argument that I accept this, that capitalism has been the key to driving a lot of people out of poverty. Would that actually change anything I said previously? The notion that a transition to capitalism has lowered poverty and the notion that capitalism inherently promotes poverty aren't contradictory, if the conditions capitalism replaced trend towards an even higher level of poverty than capitalism does. Under that circumstance, you would expect to see a dramatic drop in poverty when it is first adopted, but then for that progress to stall, without poverty's elimination, once the level capitalism trends towards under the circumstances is reached. Were the question something like "would you prefer to live under capitalism, or something like feudalism or an authoritarian command economy?" then sure, it'd be the least bad among those. But it's still not good enough, and if nothing else we've tried has gotten there, then if we want the actual elimination of poverty, which I think we should, we're going to need to experiment with new ways of doing things.


  • It has indeed, but you could, for instance, have said that poverty was a result of feudalism when that was the primary economic system. The sentiment here isn't that capitalism alone causes poverty so much as that poverty is a result of the design of our social order rather than of the individuals experiencing it, with the implication that solving it requires adopting some system that doesn't inherently promote it.








  • Depends on how literally you mean it. In general, those most likely to say it won't think that humans are literally designed not to die and only do so because someone made a mistake, but more that humans might be redesigned or modified not to (or at least not from biological aging). It's not a hard sentiment to find if you hang out in spaces with transhumanists. But I find the ones that overlap with AI bros, who tend to have an attitude like "this will totally happen in my lifetime, with no effort, because the AI singularity is going to come and give us everything in a few years", impossible to talk to, because all too often they will cite even the tiniest listed improvement in any AI system as proof that literally everything possible or impossible is about to happen, and then insist you aren't paying attention when you respond with skepticism.


  • Emotions aren't entirely rational, with a clearly thought-out process to justify why one should feel them. In any case, it's common enough for people to assign the general actions of people within a group to the group as a whole (which isn't really fair or a reflection of reality, but can be pragmatic at times and requires less thought and information than judging on an individual basis, so it makes sense that people's brains are wired to do it even if it's not always desirable). This can get extended to the groups one is a part of oneself, including those whose membership one did not choose. And the US at the moment has even worse leadership than is typical, has a great deal of power for that leadership to abuse, and still has media free enough that people within it stand a good chance of knowing about at least some of it. On top of that, if you're here on Lemmy, you're probably running into people with a somewhat higher than normal awareness of the historical abuses previous Americans have perpetrated, just because it leans left and anti-establishment, and those things get talked about a lot in such spaces.


  • What help can a modern AI really give you in making a nuke, though? It could give you broad-strokes information about how they work in general, but that information isn't really a secret anyway; nukes are a technology over three quarters of a century old, and you can just look up how they work. For anyone with any realistic chance of building one, obtaining that information isn't the problem.

    You could perhaps ask the thing for more specific information about how to design all the relevant components, but then you have to deal with the issue that AIs tend to be wrong a lot of the time. And in any case, if you have the resources to seriously have a chance at building such a thing, is hiring, recruiting, or training some actual nuclear physicists or engineers really going to be your limiting factor, such that getting a bot to do their work would help you?

    I'd imagine the hard part is actually getting or refining nuclear material to the needed enrichment level, testing the thing, and doing all of this without being found out. ChatGPT or whatever can't exactly go out and buy uranium or build a secret enrichment facility for you, no matter how much you jailbreak its safeguards on the matter.




  • You misunderstand; I am not saying "make sure he spends it responsibly". Nobody has "made" him do this at all, and I didn't advocate for a policy of doing so. What I'm saying is that I don't think this particular use is worthy of condemnation the way his other actions are, because in the long run I think this specific thing will end up benefiting people other than him whether he intends for that to happen or not (even if the American healthcare system prevents access, which I'm not confident it will do completely, not every country has that system, it's statistically improbable that the US will keep it forever, and research results are both durable and cross borders). That sentiment isn't saying his wealth is excused, just that I think people are seeing only the negatives in this merely because of the association with Altman's name, and ignoring the potential benefits out of cynicism. The concept is just as valid with him funding it as it would be had he been condemning it instead.


  • The response to something beneficial being available only to the rich shouldn't be to avoid developing that thing; it should be to make it available to everyone. The failures of the US healthcare and economic systems don't suddenly make developing new medical techniques a bad thing. Human augmentation is a separate issue from curing genetic disease, though I'd personally argue it wouldn't be a bad cause either, with the same caveat about its availability. It at least has more potential to improve somebody's life somewhere down the line than buying a yacht with his ill-gotten gains or some other useless rich-person toy would.