There are a couple I have in mind. Like many techies, I am a huge fan of RSS for content distribution and XMPP for federated communication.

The really niche one I like is S-expressions as a data and configuration format, in place of JSON, YAML, TOML, etc.

I am a big fan of plaintext formats, although I wish Markdown had a few more features, like tables.

    • frezik@midwest.social · 15 days ago

      S-expressions are basically a direct way of writing the AST a compiler would normally generate. They can be extremely flexible. M-expressions were supposed to be the programming part of Lisp, and S-expressions the data part. Lisp programmers noticed that code is just another kind of data to be manipulated, and ended up using only S-expressions.
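
      For example, the tiny expression below is both a program and a plain nested list; the "AST" is simply the list structure itself, which is why macros can take code apart like any other data:

      (+ 1 (* 2 3))    ; evaluates to 7; read as data, it is just the list (+ 1 (* 2 3))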

      Logo is arguably a Lisp with M-expressions. But whatever niche Logo had is taken by Python now.

    • Cyclohexane@lemmy.ml (OP) · 16 days ago

      The appeal of JSON and YAML is readability and, partly, ease of parsing. I'd say S-expressions beat both on both counts.
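
      A small made-up config, just to illustrate, written first as an S-expression and then as the equivalent JSON:

      (server
        (host "127.0.0.1")
        (port 8080)
        (allowed-origins ("example.org" "example.net")))

      {
        "server": {
          "host": "127.0.0.1",
          "port": 8080,
          "allowed_origins": ["example.org", "example.net"]
        }
      }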

      Can you please expand on your references to NoSQL and to “lightweight markup”? I don't quite understand what you meant there.

  • boramalper@lemmy.world · 17 days ago

    ActivityPub :) People spend an incredible amount of time on social media—whether it be Facebook, Instagram, Twitter/X, TikTok, or YouTube—so it’d be nice to liberate that.

      • litchralee@sh.itjust.works · 17 days ago

        It’s also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short side of even-numbered ISO 216 paper (e.g. A2, A4, A6) is narrower than the ANSI equivalents. And for the odd-numbered sizes, I’ve seen Tabloid-size printers in America which generously accommodate A3.

        For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.

        In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit: 11.7 inches divided by three is about 3.9 inches, which clears the envelope’s 4.125-inch height. Although this would require precision folding, that’s no problem for automated letter mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A-series paper and C-series envelopes at the same time.

        Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.

        TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.

    • Eager Eagle@lemmy.world · 17 days ago

      Also, A4 simply has a better ratio than Letter. Letter is too wide, which makes A4 easier to hold, and A4 fits more lines per page.

    • FizzyOrange@programming.dev · 17 days ago

      Presumably you could just buy that paper size? The two are pretty similar, and printers support both. I’ve never had an issue printing a US Letter sized PDF (which I assume I have done at some point).

      Kind of weird that you guys stick to US Letter when switching would be zero effort. I guess to be fair there aren’t really any practical benefits either.

  • BB_C@programming.dev · 17 days ago

    The term “open standard” does not cut it. People should start using “publicly available and sharable” instead (maybe there is a better name for it).

    ISO standards, for example, are technically “open”. But how relevant is that to a curious individual developer, when anything you need to implement would require access to multiple “open” standards, each coming with a (monetary) price, and with some extra shenanigans [archived] on top?

    IEEE standards however are actually truly open, as in publicly available and sharable.

    • ReversalHatchery@beehaw.org · 16 days ago

      why do we call standards open when they require people to pay for access to the documents? to me that does not sound open at all

      • BB_C@programming.dev · 16 days ago

        Because non-open ones are not available at all, even for a price. Unless, of course, you buy something bigger than the “standard” itself, like the company responsible for it or one with access to it.

        There is also the process of standardization itself, with committees, working groups, public proposals, etc. involved.

        Anyway, we can’t backtrack on calling ISO standards and their like “open” at the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.

      • frezik@midwest.social · 15 days ago

        It’s a historical quirk of the industry. This stuff came around before open-source software and the OSI definition were ever a thing.

        10BASE5 ethernet was an open standard from the IEEE. If you were implementing it, you were almost certainly an engineer at a hardware manufacturing company that made NICs or hubs or something. If it was $1,000 to purchase the standard, that’s OK, your company buys that as the cost of entering the market. This stuff was well out of reach of amateurs at the time, anyway.

        It wasn’t like, say, DECnet, which began as a DEC project for use only in their own systems (but later did open up).

        And then you have things like “The Open Group”, which controls X11 and the Unix trademark. They are not particularly open by today’s standards, but they were at the time.

    • Cyclohexane@lemmy.ml (OP) · 17 days ago

      I never really quite understood IPFS and why it gets used where I see it today. What problem is it solving?

      • madnificent@lemmy.world · 17 days ago

        IPFS would replace Content Delivery Networks in present day.

        It would also allow you to host software and other content from your own network again, without the constraints modern Internet Service Providers place on your self-hosting capabilities.

        If applications are built for it, it could serve as live storage for your applications too.
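
        As a rough sketch of what content addressing buys you (Python; the CID below is a placeholder and ipfs.io is just one public gateway): any gateway or peer can serve the same immutable bytes for a given hash, which is what makes the CDN comparison work.

        import requests

        cid = "bafy..."  # placeholder content identifier (CID), not a real one
        # any IPFS gateway can answer for this CID, because the address is the hash of the content
        resp = requests.get(f"https://ipfs.io/ipfs/{cid}", timeout=30)
        resp.raise_for_status()
        data = resp.content  # same bytes no matter which gateway or peer served them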

        We ran ipfs-search. In one of the experiments we could show that a distributed search index on ipfs-search, accessible through JavaScript, is likely feasible given the necessary research. Parts of the index would automatically be hosted by clients who used the index, thus creating a fairly resilient system.

        Too bad IPFS couldn’t get over the technical hurdles of limiting connection setup time. We could get a fast (ElasticSearch based) index running and hosted over common web technologies, but fetching content from IPFS directly was generally rather slow.

        • Valmond@lemmy.world · 16 days ago

          Would you be interested in a similar protocol that supports more things (and is IMO easier to set up)?

          • madnificent@lemmy.world · 15 days ago

            I’m not actively looking but please do share references! Other people may read this and they may want to know too. Perhaps I’ll jump back in the rabbit hole at some point too 😁

            • Valmond@lemmy.world · 15 days ago

              Okay here it goes!

              Tenfingers sharing protocol & Python implementation (your Python needs cryptodomex, or use the frozen executables).

              http://tenfingers.org

              You share theirs, they share yours (all encrypted)! So no benevolent nodes or crypto and it’s 100% decentralised.

              I’m working on better documentation for how to set it up (basically just forward a port and run the setup).

              • madnificent@lemmy.world · 11 days ago

                I had to read the overview and it looks nice. It reads like IPFS without some of the challenging cruft. Well written!

                IPFS seemingly works at small scale but not at large scale. What makes tenfingers handle millions of files and petabytes of data better than IPFS? Perhaps that is not the goal. In what way do you think the tech scales? Why will discovery of the node that has the data be quick?

                I want to ask for benchmarks but you can’t do a full benchmark without loads of resources.

                • Valmond@lemmy.world · 11 days ago

                  Thanks!

                  IPFS is static, whereas tenfingers is dynamic when it comes to the links. So you can update the shared data without the need of redistributing the link.

                  That said, it’s also very different tech-wise; there is no need for benevolent nodes (or any crypto or payment).

                  Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).

                  The distribution part, where nodes share your data, is based on reciprocal sharing: you share theirs and they share yours. If they stop sharing (there are checks), you just ditch the deal and ask for a new deal with another node.

                  With oversharing (by default you share your data with 10 other nodes, and share their data in return), bad nodes should be a non-problem, and you also get good uptime and takedown safety.
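
                  Purely hypothetical pseudo-Python, not tenfingers’ actual code, just to sketch the reciprocal deal-keeping described above:

                  import random

                  def maintain_deals(my_deals, known_nodes, target=10):
                      """Keep about `target` reciprocal deals alive; drop peers that stop sharing."""
                      for peer in list(my_deals):
                          if not peer.still_serves_my_data():      # the "checks" mentioned above (hypothetical API)
                              my_deals.remove(peer)                # ditch the deal
                      while len(my_deals) < target and known_nodes:
                          candidate = random.choice(known_nodes)   # discovery: just ask nodes for known nodes
                          if candidate.accepts_deal():             # you share theirs, they share yours (hypothetical API)
                              my_deals.add(candidate)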

                  This also makes the system infinitely scalable node-wise, as every node does not need to know all other nodes, just enough for its needs (for example thousands out of millions of existing nodes).

                  To share lots of data, you need to bring enough storage and bandwidth to the table, because it’s reciprocal; so basically it’s up to your node how much it can share.

                  Big data sets are always complicated because of errors and long download times. I have done 300MB files without problems, but the download process can certainly be made better (with parallel downloading, for example, and better error handling).

                  I haven’t worked on sharing way bigger datasets, even a simple terabyte is a pita to download on the regular internet :-) and the use case is more the idea of sharing lots of smaller data, like a website for example, or a chat.

                  What do you think, am I missing something important? Or of course if you have other questions please do ask!

                  Also, sorry I’m writing this on my mobile so it’s not very well written.

                  Edit: missed one question; getting the data is straightforward (it’s handled in a somewhat complicated way because of the changing nature of things), but when you download, you have the addresses of the nodes sharing your data, so you just connect to one of them and download it (or the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.

  • webbureaucrat@programming.dev · 16 days ago

    I’ll give my usual contribution to RSS feed discourse, which is: news flash! RSS feeds support video!

    It drives me crazy when podcasters are like, “thanks for listening to our audio podcasts. We also have a video feed for our YouTube subscribers.” Just let me have the video in PocketCasts please!
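
    For reference, this is all the feed itself needs; an item’s enclosure just points at a video file instead of an MP3 (the URL below is a placeholder):

    <item>
      <title>Episode 42</title>
      <enclosure url="https://example.com/ep42.mp4" length="123456789" type="video/mp4" />
    </item>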

    • 0x1C3B00DA@fedia.io · 16 days ago

      I feel you, but I don’t think podcasters point to YouTube for video feeds because of a supposed limitation of RSS. They do it because of the storage and bandwidth costs of hosting video.

  • ulterno@lemmy.kde.social · 15 days ago

    I like Doxygen’s implementation and extension of Markdown. Pair it with PlantUML and you have something worth being a standard.

  • filister@lemmy.world · 17 days ago

    The metric system; f*ck the imperial system. Every scientist sticks to the metric system, and why people still use an imperial system, with outdated measurements like stones for weight, blows my mind.

    Also f*ck Fahrenheit, we have Celsius and Kalvin for that, we don’t need another hard to convert temperature measurement.

    • mox@lemmy.sdf.org · 16 days ago

      Also f*ck Fahrenheit, we have Celsius and Kalvin for that,

      Who is Kalvin? Did you mean kelvin?

      One drawback of Celsius/centigrade is that its degrees are so coarse that weather reporting ends up either inaccurate or complicated by floating point numbers. I’m on board with using it, but I won’t pretend it’s strictly superior.

      • tleb@lemmy.ca · 16 days ago

        A degree Celsius is not coarse and does not require decimals in weather reports, and I suspect only a person who has never lived in a Celsius-using country could make such silly claims.

        • mox@lemmy.sdf.org · 16 days ago

          A degree Celsius is not coarse and does not require decimals

          Consider that even if the difference between 15° and 16°C is not significant to you, it very well might be to other people. (Spoiler: it is.)

          I suspect only a person who has never lived in a Celsius-using country could make such silly claims.

          Then your suspicions are leading you astray.

          • RecluseRamble@lemmy.dbzer0.com · 16 days ago

            They didn’t say a difference of 1 K isn’t significant, only that a difference of 0.1 K isn’t.

            And since the supposed advantage of Fahrenheit is that it better reflects typical ambient temperatures, we have to consider relevance for average people. Hardly anyone will feel a difference of 0.1K.

            That’s why European weather reports usually show full degrees. And also our fridges show full degrees.

            • WldFyre@lemm.ee · 15 days ago

              What about thermostats for homes? I can absolutely feel a 2 deg F difference

              • ulterno@lemmy.kde.social · 15 days ago

                I use °C and I feel the need to use the places after the decimal. Also, I feel nothing wrong about it.

                Also, I use °F for body temperature measurement and need to use the places after the decimal and feel fine with it.

                Also, when using °C for body temperature, I still require the same number of decimal places as I require for °F.

                I am not saying that °F is not useful, but I am invalidating your argument.

    • kn33@lemmy.world · 17 days ago

      I’ll fight you on fahrenheit. It’s very good for weather reporting. 0° being “very cold” and 100° being “very hot” is intuitive.

      • scaramobo@lemmynsfw.com · 16 days ago

        Ask someone in the north of Finland how hot “very hot” is, and how cold “very cold” is. Then ask the same in central Africa. Spoiler: it will vary a lot.

      • filister@lemmy.world · 17 days ago

        At 0 degrees Celsius water freezes, and at 100 degrees Celsius water boils. Celsius also has a direct link to Kelvin (K = °C + 273.15), and the kelvin is the SI unit for measuring temperature.

      • SorteKanin@feddit.dk · 17 days ago

        Knowing whether it may snow or rain depending on whether you are below or above 0 is very useful though. 0 and 100 are only intuitive because you’re used to those numbers. -20 being very cold and 40 being very hot is just as easy.

      • RecluseRamble@lemmy.dbzer0.com · 16 days ago

        For traffic Celsius is more intuitive since temps approaching zero means slippery roads.

        You’re long past that with Fahrenheit. And on a scale from 0 (very cold) to 100 (very hot), 32 doesn’t seem that cold. Until you see the snow outside.

          • Minnesotan here. Can confirm that 32 is still long-sleeve shirt weather.

            I regularly see people here walking into a store from the parking lot in T-shirts, in 32° weather. Wind chill makes a far greater difference: 38° with wind chill feels far colder than 32° with no wind.

          • ulterno@lemmy.kde.social · 15 days ago

            my sense of temperature is much different than someone from somewhere warm

            That’s probably the reason for this preference.

            10°C for me means my PC doesn’t heat up the room enough and I need a heater.

      • arendjr@programming.dev · 17 days ago

        0° being “very cold” and 100° being “very hot” is intuitive.

        As someone who’s not used to Fahrenheit, I can tell you there’s nothing intuitive about it. How cold is “very cold” exactly? How hot is “very hot” exactly? Without clear reference points, all the numbers in between are meaningless, which is exactly how I perceive any number in Fahrenheit. Intuitive would mean that, without knowing the scale, I should have some feel for it, but really there’s nothing to go on. I guess from your description 50°F should mean it’s comfortable? Does that mean I can go out in shorts and a t-shirt? It all seems like guesswork.

        • Remavas@programming.dev · 16 days ago

          About the only useful thing I see is that 100 Fahrenheit is about body temperature. Yeah, that’s about the only nice thing I can say about Fahrenheit. All temperature scales are arbitrary, but since our environment is full of water, one tied to the phase changes of water around the atmospheric pressure the vast majority of people experience just makes more sense.

          • AnAmericanPotato@programming.dev · 16 days ago

            All temperature scales are arbitrary, but since our environment is full of water, one tied to the phase changes of water around the atmospheric pressure the vast majority of people experience just makes more sense.

            But when it comes to weather, the boiling point of water is not a meaningful point of reference.

            I suppose I’m biased since I grew up in an area where 0-100°F was roughly the actual temperature range over the course of a year. It was newsworthy when we dropped below zero or rose above 100. It was a scale everybody understood intuitively because it aligned with our lived experience.

      • tleb@lemmy.ca · 16 days ago

        This is strictly untrue for many climates. Where I live in Canada, 0°F is an average winter day, and 100°F is record-breaking “I might actually die” heat.

        -30C to 30C is not any more complicated or less intuitive than -22F to 86F

  • Kissaki@programming.dev · 16 days ago

    I wish standards were always open access. Not behind a 600 dollar paywall.

    When it is paywalled I’m irritated it’s even called a standard.

  • fubarx@lemmy.ml · 17 days ago

    Since nobody’s brought it up: MQTT.

    It got pigeonholed into the IoT world, but it’s a pretty decent event pub/sub system. It has lots of security/encryption options, plus a websocket layer, so you can use it anywhere from devices, to mobile, to web.

    As of late last year, RabbitMQ started supporting it via a server add-on, so it’s easy to use it to create scalable, event-based systems, including for multiuser games.
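
    A minimal pub/sub sketch with the Python paho-mqtt client (1.x-style API; the broker host and topic are placeholders):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # every subscriber to the topic receives the event
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)   # placeholder broker
    client.subscribe("game/lobby/42/events")     # placeholder topic
    client.publish("game/lobby/42/events", "player joined")
    client.loop_forever()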

    • antimongo@lemmy.world · 17 days ago

      I spun up a MQTT/Aedes/MongoDB stack on my network recently for some ESP32 sensors.

      Fantastic protocol and super easy to work with!

      • fubarx@lemmy.ml · 16 days ago

        Installed RabbitMQ for use in Python Celery (for task queue and crontab). Was pleasantly surprised it also offered MQTT support.

        Was originally planning on using a third-party, commercial combo websocket/push notification service. But between RabbitMQ/MQTT with websockets and Firebase Cloud Messaging, I’m getting all of it: queuing, MQTT pubsub, and cross-platform push, all for free. 🎉

        It all runs nicely in Docker, and when it’s time to deploy and scale, I trust RabbitMQ more since it has solid cluster support.
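
        Roughly what the Celery piece looks like (broker URL, module name, and task are placeholders; the MQTT side is a separate RabbitMQ plugin):

        from celery import Celery
        from celery.schedules import crontab

        app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

        @app.task
        def nightly_report():
            ...  # placeholder task body

        # crontab-style scheduling via celery beat
        app.conf.beat_schedule = {
            "nightly-report": {
                "task": "tasks.nightly_report",
                "schedule": crontab(hour=3, minute=0),
            },
        }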

    • FizzyOrange@programming.dev · 17 days ago

      XMPP is not a good protocol though. There’s a reason nobody uses it anymore.

      I think it’s going to be interesting when the EU tries to enforce interoperability between the major messaging platforms. What are they going to do? They have some ridiculous targets like interoperable end-to-end encrypted group video calls in 5 years!

      • 0x0@programming.dev · 17 days ago

        There’s a reason nobody uses it anymore.

        Yeah, Google and Facebook EEE’d it.

        XMPP is not a good protocol though.

        Do elaborate.

        • endofline@lemmy.ca · 17 days ago

          XMPP is very old and was created when nobody knew about mobile phones. It works more like a true messaging app and less like a message store (unlike Matrix).

          The requirement of a permanent TCP/IP connection doesn’t work well on mobile, and pretty much every useful feature in XMPP (like message history) is optional. If something doesn’t work in XMPP, most people will blame XMPP/Jabber rather than the lack of feature support in their server.

          • 0x0@programming.dev · 17 days ago

            XMPP is very old

            Seriously? That’s your argument? So is the wheel.

            Requirement of permanent tcp ip connection doesn’t work well for mobile

            I was under the impression PubSub was created for that.

            Still, it’s an open extensible protocol.

            • MonkderVierte@lemmy.ml · 16 days ago

              XMPP is very old

              Seriously? That’s your argument? So is the wheel.

              They elaborated on how that relates: the usage scenario changed with mobile phones, and XMPP is a bad match for it.

              • 0x0@programming.dev · 16 days ago

                XMPP is a bad match.

                The X is for extensible. A whole bunch of other protocols are just as old, and people haven’t stopped using them; they get improved upon (for the most part).

            • endofline@lemmy.ca · 16 days ago

              Seriously, if you take one sentence out of the whole response, you end up fighting straw men.

              I just told you that Jabber/XMPP was created at a time when almost nobody knew, or believed, that mobile phones could be a thing. Thus it was created that way, with many similarities to email, IRC, or ICQ, which didn’t stand the passage of time.

              Of course, you’re right that XMPP evolved to get the PubSub extension as an “optional feature”, but because of its availability (or rather the lack of it), with most servers not supporting it even when the client did, XMPP didn’t win the acceptance of end users. It got some attention in the business world (Cisco Jabber) but not in retail.

              Business cannot work forever without clients willing to pay, or at least to use the product, so it died off even in the business world.

              End of story; try not to fight the straw men you created.

              • 0x0@programming.dev · 16 days ago

                Of course, you’re right that XMPP evolved to get the PubSub extension as an “optional feature”, but because of its availability (or rather the lack of it), with most servers not supporting it even when the client did, XMPP didn’t win the acceptance of end users. It got some attention in the business world (Cisco Jabber) but not in retail.

                That XMPP’s extensibility is in itself a strength and a weakness is indeed a valid argument, as you’ve exemplified. I was expecting you’d criticize OMEMO though…

                Business cannot work forever without clients willing to pay or at least use, so it died off even in the business.

                No, it didn’t die off, it’s still used. IRC is still used as well, probably more or less at the same level. But if you define usage as “used in business” well then probably just a few cases, yes.

                I hadn’t heard of Cisco Jabber, but I’ve heard of Google and Facebook: both companies’ messengers were initially based on XMPP, but they EEE’d it once they had enough users and walled their gardens, dealing a major blow to the protocol.

                End of story, try not to fighting with the straw men you created.

                Can I fight my inner daemons at least? Please?

      • matcha_addict@lemy.lol · 16 days ago

        There’s a reason nobody uses it anymore.

        I and many others use it! And Google, Meta, etc. have used it, but decided to lock it down.

        Yes you’re right, there’s a reason people don’t use it as much, which is because these corporations embraced it, dominated it, then extinguished it.

        But XMPP is honestly my favorite comm protocol and the most impressive imo.

      • leetnewb@beehaw.org · 16 days ago

        I use xmpp. It happens to be a great fit for a private family messaging service. Good interoperability between modern clients. I get that “nobody uses it” is hyperbole, but the internet is a big place and there is room for services without mass market appeal to thrive.

    • state_electrician@discuss.tchncs.de · 17 days ago

      For RSS I honestly don’t see a point, at least for me. What’s the use for having update feeds in a unified format when I still have to go to each fucking site to view the full text? I completely see the point of RSS when all I need is in the feed. But I hate going from different UI to different UI to get the full content. I want something like inoreader.com for self-hosting.

        • state_electrician@discuss.tchncs.de · 17 days ago

          I know that. But RSS is like 95% used for news feeds and that’s what I’m talking about. The way RSS is overwhelmingly used is making the whole thing useless (to me).

      • matcha_addict@lemy.lol · 16 days ago

        What’s the use for having update feeds in a unified format when I still have to go to each fucking site to view the full text

        This has nothing to do with RSS, it is the author’s choice. It’s like someone who posts links to their articles on Twitter / Facebook / Reddit, same thing. The platform doesn’t prevent you from putting the entire content there, and in fact, many do, especially with RSS.

        One benefit of RSS, though, is that because it is an open protocol, the problem you mention already has solutions that auto-fetch the articles for you. That wouldn’t be possible without an open protocol like RSS.

        Moreover, I’d argue that even with that limitation, RSS is still a huge plus. Having all your content’s headlines in one UI, which you can filter or sort however you want, is pretty awesome.
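
        A rough sketch of that “one UI, auto-fetch the rest” idea in Python (the feed URL is a placeholder):

        import feedparser
        import requests

        feed = feedparser.parse("https://example.com/feed.xml")
        for entry in feed.entries:
            print(entry.title, "-", entry.link)
            # readers that auto-fetch full text simply follow entry.link
            article_html = requests.get(entry.link, timeout=30).text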

      • Overwrite1@lemm.ee · 16 days ago

        Miniflux is likely to tick most of your boxes. It’s self-hostable and can download the full article without extra clicks or having to visit the source.

      • milis@programming.dev · 17 days ago

        RSS works great for me though.

        I have an app on my not-so-smart phone to read news when commuting. It is not a long journey, so I just want a quick glance at the headlines and to read the articles that actually interest me. There are only 6 sites that I am interested in, but it would still take quite some work to crawl the websites themselves. RSS, in turn, is unified, so I don’t need to worry about their website layouts, formats, etc. It also gives me a URL to the actual content, which I can parse with a readability/reader-mode library to further reduce unnecessary content.

        Quite the opposite, I hope more informational sites offer/keep RSS! (Some have removed RSS, typically after a revamp or design change.)

  • Kissaki@programming.dev · 16 days ago

    TOML instead of YAML or JSON for configuration.

    YAML is complex and has security concerns most people are not aware of.

    JSON works, but the quoting, braces, and indentation are a lot of noise for a simple, categorized key-value format.
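
    A made-up example of the flat, sectioned key-value case I mean:

    [server]
    host = "127.0.0.1"
    port = 8080

    [logging]
    level = "info"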

    • NostraDavid@programming.dev · 15 days ago

      YAML is complex and has security concerns most people are not aware of.

      YAML is racist to Norwegians.

      If you have something like country: NO (NO = Norway), YAML will turn that into country: False. Why? Implicit casting. There are a bunch of truthy strings that’ll be cast automagically.
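
      A quick check with PyYAML (which implements YAML 1.1 booleans) shows it; quoting the value is the usual workaround:

      import yaml

      print(yaml.safe_load("country: NO"))     # {'country': False}
      print(yaml.safe_load("country: 'NO'"))   # {'country': 'NO'}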

    • FizzyOrange@programming.dev · 16 days ago

      TOML is not a very good format IMO. It’s fine for very simple config structures, but as soon as you have any level of nesting at all it becomes an unobvious mess. Worse than YAML even.

      What is this even?

      [[fruits]]
      name = "apple"
      
      [fruits.physical]
      color = "red"
      shape = "round"
      
      [[fruits.varieties]]
      name = "red delicious"
      
      [[fruits.varieties]]
      name = "granny smith"
      
      [[fruits]]
      name = "banana"
      
      [[fruits.varieties]]
      name = "plantain"
      

      That’s an example from the docs, and I have literally no idea what structure it makes. Compare to the JSON which is far more obvious:

      {
        "fruits": [
          {
            "name": "apple",
            "physical": {
              "color": "red",
              "shape": "round"
            },
            "varieties": [
              { "name": "red delicious" },
              { "name": "granny smith" }
            ]
          },
          {
            "name": "banana",
            "varieties": [
              { "name": "plantain" }
            ]
          }
        ]
      }
      

      The fact that they have to explain the structure by showing you the corresponding JSON says a lot.

      JSON5 is much better IMO. Unfortunately it isn’t as popular and doesn’t have as much ecosystem support.

      • ulterno@lemmy.kde.social · 15 days ago

        JSON5

        Nice. I mostly use Qt’s JSON support, and upon reading the spec I see at least a few things I would want from this, even when using it for machine-to-machine communication.

      • Hawk@lemmy.dbzer0.com · 15 days ago

        You’re using a purposely convoluted example from the spec. And I think it shows exactly how TOML is better than JSON for creating config files.

        The TOML file is a lot easier to scan than the hopelessly messy json file. The mix of indentation and symbols used in JSON really does not do well in bigger configuration files.

    • timbuck2themoon@sh.itjust.works · 15 days ago

      People bitch about YAML but for me it’s still the preferred one just because the others suck more.

      TOML, as said, is fine for simple things, but as soon as you get a bit more complex it’s messy and unwieldy. And JSON is fine to operate on, but for a config? It’s a mess; it’s harder to type and read for something like a config file.

      Heck, I’m not even sold on S-expressions compared to YAML yet. But then, I deal with all of these formats so much that I simply still prefer YAML for readability and ease of use (compared to the others).

  • GamingChairModel@lemmy.world · 16 days ago

    This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.

    OK, so it starts off with the scheme name, which makes sense: http: or ftp: or even tel:

    But then it goes into the domain name system, which suffers from the problem that the root, then the top-level domain, then the domain, then progressively smaller subdomains go right to left. www.example.com requires the system to look up the root, to see who manages the .com TLD, then who owns example.com, then the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root. The rest of the URL, by default, then goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if it were all left to right, including the domain name.
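
    To make the mismatch concrete (made-up URL):

    https://www.example.com:8080/docs/api/v2/users
      hostname: www -> example -> com -> (implied root), smallest scope first
      path:     docs -> api -> v2 -> users, largest scope first

    Written consistently left to right, it would look more like https://com.example.www:8080/docs/api/v2/users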

    Then don’t get me started about how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance it would’ve made sense to not try to push www in front of all website domains throughout the ’90s and early 2000s.

    • oldfart@lemm.ee · 15 days ago

      Don’t worry, in 5 or 10 years Google will develop an alternative and the rest of FAANG will back it. It will be super technically correct but will include a cryptographic signature that only big tech companies can issue.

    • The_Decryptor@aussie.zone · 16 days ago

      Then don’t get me started about how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance it would’ve made sense to not try to push www in front of all website domains throughout the ’90s and early 2000s.

      I have never understood why you can delegate a subdomain but not the root domain. I doubt it was a technical issue, because they recently added support for it via SVCB records (but maybe technical concerns were actually fixed in the decades since).