docx files are actually zip archives with xml in them
Let me tell you something. I cannot tell you what company, but I have been tasked with putting Excel files in git “because they are just zip archives with xml”, and it is just a disaster. Every time you save the document, Excel writes certain parts of the XML in arbitrary ways (for example, each image sits in a list whose order is random on every save), some metadata such as the last-modified time is rewritten every time, and finally, all the XML files are written out as one single line.

The git diffs are completely useless and noisy, and merely opening the Excel file will cause git to consider it updated. So sure, you can use git to snapshot your Office documents… But just don’t.
If you are, like I once was, the poor fool who has to maintain a bunch of VBA macros… Extract them into files and source control those. Make a script to extract them and to put them back, and use git-lfs for the actual workbook if you need a template workbook.
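For Excel specifically, the extraction half can go through the VBE object model. A minimal sketch, assuming Windows, Excel and pywin32, with the “Trust access to the VBA project object model” option enabled; the script name and module-handling choices are mine, not gospel:

```python
# extract_vba.py -- hypothetical sketch: export every VBA module from a
# workbook into plain-text files that can be committed to git.
# Requires Windows, Excel, pywin32, and Excel's
# "Trust access to the VBA project object model" setting enabled.
import os
import sys

import win32com.client

EXTENSIONS = {1: ".bas", 2: ".cls", 3: ".frm"}  # std module / class / userform

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(os.path.abspath(sys.argv[1]))
try:
    for component in wb.VBProject.VBComponents:
        ext = EXTENSIONS.get(component.Type)
        if ext:  # skip document modules (ThisWorkbook, sheet modules)
            component.Export(os.path.abspath(component.Name + ext))
finally:
    wb.Close(SaveChanges=False)
    excel.Quit()
```

The round trip still won’t be byte-stable, as noted further down, but at least the diffs become readable.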
Now pardon me, I need to add this to the agenda for my next therapy session.
I will join that therapy session. This is pretty much what we did, except for LFS, since it was “a requirement” to also track what the layout of the Excel file looked like.
And even extracting and inserting the code was not stable. Excel will arbitrarily change the casing of “.path” to “.Path” for no reason, and add and remove whitespace between functions as it sees fit. It was such a pain. We also had a hard time handling Unicode strings, for instance ones containing a degree sign. And the list goes on.
Perhaps M$ does that specifically to make it hard to work with their formats? That way, tools like LibreOffice never reach 100% compatibility, preserving Microsoft’s market share.
I hear ya. But to be honest, what they are doing here is fine, and doesn’t seem malicious. There is an open specification for the format and they stick to it, but the spec doesn’t enforce everything. For instance, take the ordering of certain elements on the page: I bet they store those elements in memory in an efficient data structure where ordering doesn’t matter, so when writing the memory out to disk, the easiest thing for them to do is just write the elements in whatever order they appear in that data structure.
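As a toy illustration of that point (this is not Excel’s actual code; the image names are invented):

```python
# Toy illustration: elements kept in a hash-based container have no
# defined order, so a naive "dump what's in memory" serializer emits
# them in whatever order iteration happens to yield.
images = {"logo.png", "chart1.png", "photo.png"}  # hypothetical names

# With Python's default string-hash randomization, this order can
# differ from one run to the next: the same kind of diff noise
# described in the top comment.
for name in images:
    print(f"<image target='{name}'/>")
```

Any writer that walks a hash-based container like this will produce exactly the unstable ordering described above, with no malice required.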
But there are probably other cases where they are not so innocent.
Just fork git to handle zipping, formatting and ignoring metadata! Or just put your office document in the cloud and use the basic versioning it provides.
Doesn’t matter, to git they are still binary files, which means it’ll check in each revision as an entirely new copy.
Yes, you might only see the most recent one in your working directory, but under the hood, all the other ones are still there in the repo.
Which isn’t any different, space-wise, from keeping them as separate files, so what’s the problem?
(Other than Word having built-in versioning.)
It’s basically just keeping a bunch of separate files but with extra steps.
I would genuinely rather use git in such a scenario than not, because it offers plenty of other useful features over a bunch of files in a folder. Sure, obviously if the file is massive it is inconvenient, but that’s not a fair comparison, because the alternative is multiple “FINAL FINAL FOR REAL” copies in a folder anyway, and those don’t take up any less space. It seems incredibly silly to describe it as “keeping files with extra steps” because people aren’t using git for space saving, they’re using it for version tracking. Everything git does is “keeping files with extra steps.”
Not quite, because text files are stored as incremental diffs, which not only saves massive amounts of space but allows for effective comparisons of what exactly has changed between versions. While the former is more of a nice bonus these days with storage being extremely cheap, the latter is in fact the main reason one would use git to begin with.
Binary files too can be stored as incremental diffs.
Yes, but without the ability to quickly see what’s changed between different versions (on a semantic level), all it will do for you is save you some storage.
With a bunch of separate files, you can at least quickly open two of them and do a manual scan, but with git you can only ever have one version checked out at a time, so now you’ll be checking out an older version, making a temporary copy of it, then checking out the version you want to compare it to, and STILL end up doing just that.
From a workflow perspective, it’s really just extra overhead, with little to no practical benefit.
What? I don’t understand what you’re trying to say. Are you trying to do a manual scan of the XML inside? That’s useless; the internal format is not intended to be human-readable. But you can use a regular git diff anyway.
Or if you want to compare rendered documents, then you probably need to write a git diff driver. Or check out multiple worktrees and use LibreOffice’s comparison.
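For the diff driver route, a textconv helper can be enough. A rough sketch for docx (the script name is made up); wire it up with `*.docx diff=docx` in `.gitattributes` plus `git config diff.docx.textconv "python3 docx_textconv.py"`:

```python
# docx_textconv.py -- hypothetical textconv helper for `git diff`.
# git invokes it with the path of a temp file holding the blob; the
# text printed to stdout is what actually gets diffed.
import re
import sys
import zipfile

with zipfile.ZipFile(sys.argv[1]) as zf:
    xml = zf.read("word/document.xml").decode("utf-8")

# Crude extraction: put each paragraph on its own line, then strip the
# remaining tags so line-based diffs show visible-text changes.
text = re.sub(r"</w:p>", "\n", xml)
print(re.sub(r"<[^>]+>", "", text))
```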
I don’t want to engage in this conversation if you’re going to ignore everything else I said about binary files, since that’s what we’re talking about.
Sorry, I just woke up and clearly didn’t parse your entire comment correctly. Should have had my coffee first.
❤️ no worries, I get it. I’ve done the same lol
Someone could probably build a tool which sits in between you and Git, which unzips the file before committing and re-zips it after pulling, so Git sees the raw XML files, but you always see the zipped docx.
edit: never mind. Just read @petersr@lemmy.world’s comment explaining why this is a bad idea.
Yeah, I made such a tool - and kept polishing edge cases until I gave up. So I just wanted to warn everyone.
I’m sure you could, but yes, it’s likely not worth the trouble.
https://git-scm.com/book/en/v2/Customizing-Git-Git-Attributes#filters_a
a clean filter that beautifies and sanitizes the xml on its way into the repo should fix that
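Something along those lines might look like this (a sketch only, not battle-tested; the script name is invented). Hook it up via `*.docx filter=docx` in `.gitattributes` and `git config filter.docx.clean "python3 normalize_docx.py"`:

```python
# normalize_docx.py -- hypothetical "clean" filter: reads the zipped
# document on stdin, writes a normalized zip on stdout.
import io
import sys
import xml.dom.minidom
import zipfile

raw = sys.stdin.buffer.read()
src = zipfile.ZipFile(io.BytesIO(raw))
out = io.BytesIO()

with zipfile.ZipFile(out, "w") as dst:
    # Sorted entry order and a fixed timestamp keep the archive stable
    # across saves; pretty-printed XML makes line-based diffs usable.
    for name in sorted(src.namelist()):
        data = src.read(name)
        if name.endswith(".xml"):
            data = xml.dom.minidom.parseString(data).toprettyxml(
                indent="  ").encode("utf-8")
        info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
        info.compress_type = zipfile.ZIP_DEFLATED
        dst.writestr(info, data)

sys.stdout.buffer.write(out.getvalue())
```

Mind the caveats from the top of the thread, though: this fixes the one-line XML and the unstable zip metadata, not the random element ordering, and pretty-printing can inject whitespace into text runs, which is exactly the kind of edge case that killed the tool mentioned above.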
I think you can write a clean/smudge filter that will turn the docx into a tree (folder).
You probably can, but here’s why that’s still not gonna be all that effective: the element ordering inside the XML is random on every save, so even beautified output produces noisy diffs.
Still better than having 30 copies of the same document and forgetting which was the last one.
Sort by -> Date modified