Salamander

  • 14 Posts
  • 179 Comments
Joined 4 years ago
Cake day: December 19th, 2021

  • I would give the Letharia dye another try

    Would love to… When I was in Oregon this lichen was super abundant. At the moment I am living in Amsterdam (Netherlands), and I see mostly Xanthoria, Evernia, Rhizocarpon, and a few other lichen species that grow on city trees, but they are very small and spotty, nothing compared to the wolf lichen in Oregon. I do miss the Oregon forests with the old growth sequoia redwood trees and all that lichen.


  • 9ft of snow?! I only experienced such deep snow in an urban setting while living in Connecticut for a year. I spent a few years in Oregon but the snow in the area never got so deep while I was there. When I was in the US I was not yet able to identify many fungi as I was mainly obsessed with animals (especially salamanders) back then, so unfortunately I did not really appreciate the diversity of fungi there. Although once in Oregon I did attempt to dye some socks using a wolf lichen (Letharia vulpina) and a pressure cooker. That did not end well.



  • Cool! I just read their wiki page and it says

    A snowbank fungus, it is most common at higher elevations after snowmelt in the spring.

    Snowbank fungus is a new term for me. Not sure yet what makes a fungus thrive through snow. Maybe they have anti-freeze proteins?

    Does your area get a lot of snow?







  • I bought a National Instruments data acquisition card (PCIe-6535B) not knowing that National Instruments is not very Linux-friendly, and I was not able to get it working. At least it was a used card, so I did not pay too much for it, but I learned my lesson not to assume compatibility.

    Once I also used ‘rm -rvf *’ from my home directory while SSH’d into a supercomputer (I made a syntax error when trying to cd into the folder that I actually wanted to delete). I was able to get my data restored from a backup, but sending that e-mail was a bit embarrassing 😆
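
    Roughly what went wrong (my reconstruction; the exact paths are hypothetical): if the `cd` fails or is mistyped, a following `rm -rvf *` runs in whatever directory you are actually in, such as `$HOME`. A safer habit is to chain the two commands with `&&` so the delete never runs unless the `cd` succeeded:

    ```shell
    # DANGEROUS pattern: if the cd fails, rm runs in the current directory.
    #   cd /scratch/old_results
    #   rm -rvf *
    #
    # Safer pattern: '&&' guards the delete behind a successful cd.
    target="$HOME/scratch/old_results"   # hypothetical path for demonstration

    mkdir -p "$target"                   # set up a demo directory
    touch "$target/junk.txt"             # something to delete
    cd "$target" && rm -rvf ./*          # rm only runs if cd succeeded
    ```

    Using an explicit `./*` (or better, the full path as the `rm` argument) also makes it harder for a stray space to widen the glob.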




  • Here, I’m assuming “it” is a conscious perception. But now I’m confused again because I don’t think any theory of mind would deny this.

    Yes, a common example of such a theory is epiphenomenalism. What I am contrasting in my answers is the epiphenomenalist/hard-determinist framework with the physicalist/compatibilist one.

    The linear, black-box picture that I am arguing against looks something like this:

    stimuli -> CPM ⊆ brain -> consciousness update CPM -?> black box -?> mind -?> brain -> nervous system -> response to stimuli

    I can try to explain the framework I am describing with such a diagram:

    stimuli -> nerves -> brain input ports -> brain filtering and distribution -> conscious brain processing via causal predictive modelling (CPM) -> brain output ports -> nerves -> conscious action
                                                          |
                                                          --> unconscious processing -> brain output ports -> nerves -> unconscious action


    So, the CPM is a process within the brain. The idea is that the brain is a computer that makes predictions by building cause-and-effect models. What is interesting about the mathematics of causal models is that the underlying engine is the counterfactual. The claim being made here is that mind itself is this counterfactual engine doing its work. The computational space that deals with the counterfactuals or “fantasies” is the essence of the mind.
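
    As a loose illustration of what "counterfactual engine" means in a causal model (this is my own toy sketch, not a model of the brain): a counterfactual is evaluated by overriding a variable's normal causes, forcing a value, and re-running the model.

    ```python
    # Toy structural causal model: rain -> sprinkler, (rain, sprinkler) -> wet grass.
    # A counterfactual "fantasy" = override one cause and re-run the model.

    def grass_wet(rain: bool, sprinkler: bool) -> bool:
        # Grass is wet if it rained or the sprinkler ran.
        return rain or sprinkler

    def simulate(rain: bool, sprinkler_override=None) -> bool:
        # Normally the sprinkler runs only when it does not rain; an
        # override models an intervention ("what if the sprinkler were off?").
        sprinkler = (not rain) if sprinkler_override is None else sprinkler_override
        return grass_wet(rain, sprinkler)

    factual = simulate(rain=False)                                    # sprinkler ran -> wet
    counterfactual = simulate(rain=False, sprinkler_override=False)   # forced off -> dry
    print(factual, counterfactual)  # True False
    ```

    The point of the sketch is only that the same model answers both "what happened" and "what would have happened", which is the sense in which the counterfactual space is doing the work.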

    This is not in any way a solution to the hard problem of consciousness. Rather, it is one of many frameworks compatible with physicalism, and it is the one I personally subscribe to. In this framework, it is a postulate that conscious experience corresponds to the brain’s counterfactual simulations within a generative model used for predicting and guiding action. This postulate does not prove or mechanistically explain consciousness. No physical theory currently does.


  • I’m going to stick with the meat of your point. To summarize, …

    That is not quite how I see it. The linear diagram “brain -> black box -> mind” represents a common mode of thinking about the mind as a by-product of complex brain activity. Modern theories are a lot more integrative. Conscious perception is not just a byproduct of the form brain -> black box -> mind, but instead it is an essential active element in the thought process.

    Ascribing predictions, fantasies, and hypotheses to the brain or calling it a statistical organ sidesteps the hard problem and collapses it into a physicalist view. They don’t posit a mind-body relationship, they speak about body and never acknowledge the mind. I find this frustrating.

    That text was probably written by a materialist / physicalist, and this view is consistent within this framework. It is OK that you find this frustrating, and it is also alright if you don’t accept the materialist / physicalist viewpoint. I am not making an argument about materialism being the ultimate truth, or about materialism having all of the answers - especially not answers relating to the hard problem! I am specifically describing how different frameworks held by people who already hold a materialist view can lead to different ways of understanding free will.

    Scientists often do sidestep the hard problem in the sense that they acknowledge it to be “hard” and keep moving without dwelling on it. There are many philosophers (David Chalmers, Daniel Dennett, Stuart R. Hameroff) who do like getting into the nitty-gritty of the hard problem, so there is plenty of material about it, but the general consensus is that answers to the hard problem cannot be found using the materialist’s toolkit.

    What materialists do have is a mechanism for building consensus: the scientific method. This consensus mechanism has allowed us to understand a lot about the world. I share your frustration in that this class of methods does not seem to be capable of solving the hard problem.

    We may never discover a mechanism to build consensus on the hard problem, and unfortunately this means that answers to many very important questions will remain subjective. As an example, if we eventually implement active inference in a computer, and the computer claims to be conscious, we may have no consensus mechanism to determine whether it “really” is conscious or not, just as we cannot ascertain today whether the people around us are conscious. In my opinion, yes, it is physically possible to build conscious systems, and at some point this will get tricky, because the answer will remain a matter of opinion. It will be an extremely polarizing topic.