• kevincox@lemmy.ml

    What are you smoking? Shallow clones don’t modify commit hashes.

    The only thing that you lose is history, but that usually isn’t a big deal.

    --filter=blob:none probably won’t help much here either, since the problem with node_modules is millions of individual files rather than a few large files (although both can be annoying).
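
    A quick sketch of the two options (the repo URL is just a placeholder):

        # shallow clone: one commit, full tree + blobs, no history
        git clone --depth=1 https://example.com/repo.git

        # blobless clone: full history, blob contents fetched on demand
        git clone --filter=blob:none https://example.com/repo.git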

    • flying_sheep@lemmy.ml

      From github’s blog:

      git clone --depth=1 creates a shallow clone. These clones truncate the commit history to reduce the clone size. This creates some unexpected behavior issues, limiting which Git commands are possible. These clones also put undue stress on later fetches, so they are strongly discouraged for developer use. They are helpful for some build environments where the repository will be deleted after a single build.

      Maybe the hashes aren’t different, but the important part is that comparisons beyond the fetched depth don’t work: git can’t know whether a shallowly cloned repo has a common ancestor with some given commit outside the fetched range, e.g. a tagged release.
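
      For example (the v1.0 tag here is hypothetical):

          git clone --depth=1 https://example.com/repo.git
          cd repo
          # ancestry questions past the shallow boundary can’t be answered;
          # the tag’s commit probably wasn’t even fetched:
          git merge-base HEAD v1.0   # likely “Not a valid object name”
          git fetch --unshallow      # downloads the rest of the history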

      Blobless clones don’t have that limitation. Git downloads the trees (a hash + path for each file) but not the file contents, so the clone still takes much less space and time.
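
      For example (placeholder URL again):

          git clone --filter=blob:none https://example.com/repo.git
          cd repo
          git log --oneline   # full history is local, no network needed
          git log -p          # diffs need blob contents, so git fetches
                              # the missing blobs from the remote on demand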

      If you want to skip all the file data, metadata included, you can do git clone --filter=tree:0, which doesn’t even download the trees.
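
      For example:

          # treeless clone: commits only; trees and blobs are fetched on
          # demand, so anything that lists files (like git log -- some/path)
          # will hit the network
          git clone --filter=tree:0 https://example.com/repo.git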

      • kevincox@lemmy.ml

        Yes, if you ask about a tag on a commit that you don’t have, git won’t know about it. You would need to download that history. You also can’t, in general, say “commit A doesn’t contain commit B”, since you don’t know all of the parents.

        You are completely right that --depth=1 omits some data. That is sort of the point, but it does have downsides. Filters also omit some data, but often that data will be fetched on demand, which can be useful. (Though it will also cause other issues, like blame taking a ridiculous amount of time.)
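
        For instance, in a blobless clone (the file name is hypothetical):

            # blame needs the file’s blob at every revision that touched it,
            # so each missing blob is fetched from the remote, often one
            # round trip at a time:
            git blame src/index.js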

        Neither option is wrong, they just have different tradeoffs.