Our business-critical internal software suite was written in Pascal as a temporary solution and has been unmaintained for almost 20 years. It transmits cleartext usernames and passwords as the URI components of GET requests. They also use a single decade-old Excel file to store vital statistics. A key part of the workflow involves an Excel file with a macro that processes an HTML document from the clipboard.
I offered them a better solution, which was rejected because the downtime and the minimal training would be more costly than working around the current issues.
The library I worked for as a teen used to process off-site reservations by writing them to a text file, which was automatically e-faxed to all locations every odd day.
If you worked at not-the-main-location, you couldn’t do an off-site reservation, so on even days, you would print your list and fax it to the main site, who would re-enter it into the system.
This was 2005. And yes, it broke every month with an odd number of days: such a month ends on an odd day and the next one starts on day 1, also odd, so the odd/even alternation skipped a beat.
I’m not an infrastructure person. If the receiving web server doesn’t log the URI, and supposing the communication is encrypted with TLS, which removes the credentials from the URI, are there security concerns?
Anyone who has access to any involved network infrastructure can trace the cleartext communication and extract the credentials.
What do you mean by any involved network infrastructure? The URI is encrypted by TLS; you would only see the host address/domain unless you had access to it after decryption on the server.
They said cleartext; I would assume it’s not HTTPS.
The comment we are replying to is asking about a situation where there is TLS. Also, using cleartext values in the URI itself does not mean there wouldn’t be TLS.
When someone just says cleartext, I assume they mean transmission too.
OP replied confirming HTTP: https://lemmy.world/comment/1033128
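Just to make the tracing point concrete: with plain HTTP, anyone positioned on the network path can read the credentials straight out of the request line. Below is a minimal, generic scapy sketch, assuming port 80 and capture privileges; nothing in it comes from OP’s actual setup.

# Generic sketch: watch port 80 and print GET request lines that appear to
# carry credentials. Requires scapy and root/admin rights to capture.
from scapy.all import sniff, TCP, Raw

def show_credentials(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        first_line = payload.split(b"\r\n", 1)[0]
        # e.g. b'GET /login?user=Alpha&pass=bravo HTTP/1.1'
        if first_line.startswith(b"GET ") and b"pass" in first_line:
            print(first_line.decode(errors="replace"))

sniff(filter="tcp port 80", prn=show_credentials, store=False)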
I’m not 100% on this but I think GET requests are logged by default.
POST requests, normally used for passwords, don’t get logged by default.
BUT the URI would get logged on both, so if the URI contained username:password@ (or credentials in the query string) then it’s likely all there in the logs.
GET and POST requests are both logged.
The difference is that the logged GET requests will also include any query params:
GET /some/uri?user=Alpha&pass=bravo
While a POST request will have those same params sent as part of the form body. The body isn’t logged, so it would look like this:
POST /some/uri
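To make the logging difference concrete, here is a minimal sketch using Python’s built-in http.server; the port, path, and field names are made up for illustration. The default handler writes the request line to its access log, so the GET’s query string shows up verbatim while the POST body never does.

# Sketch: the stock request logger records "GET /login?user=Alpha&pass=bravo"
# but only "POST /login" for the form-body variant.
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def _ok(self):
        # send_response() also logs the request line to stderr.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def do_GET(self):
        self._ok()  # logged line includes the full query string

    def do_POST(self):
        # Read and discard the form body; it never appears in the log.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self._ok()

server = HTTPServer(("127.0.0.1", 8080), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Credentials in the query string -> visible in the access log.
urllib.request.urlopen("http://127.0.0.1:8080/login?user=Alpha&pass=bravo")

# Same credentials in a POST body -> only "POST /login" is logged.
data = urllib.parse.urlencode({"user": "Alpha", "pass": "bravo"}).encode()
urllib.request.urlopen("http://127.0.0.1:8080/login", data=data)

server.shutdown()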
Nope, it’s bare-ass HTTP. The server software also connected to an LDAP server.
I don’t even let things communicate on /30 networks via HTTP/cleartext…this whole thing is horrifying.
I would still not sleep well; other things might log URIs to different unprotected places. Depending on how the software works, this might be the client, but also middleware or a proxy…
I can practically guarantee you it was not
Browser history
Even if the destination doesn’t log GET components, there could be corporate proxies that MITM the connection and log the URL. Corporate proxies usually present an internally trusted certificate to the client.
I feel your pain. Many good ideas get rejected for exactly those reasons. I have had ideas requiring one big chunk of downtime rejected even though they would eliminate short but constant downtimes, and mathematically the fix would easily pay for itself within a month.
Then there’s the minimal retraining, which is frustrating when work environments and coworkers still pretend computers are some crazy device they’ve never seen before.
Places like that never learn their lesson until The Event™ happens. At my last place, The Event™ was a derecho that knocked out power for a few days, and then when it came back on, the SAN was all kinds of fucked. On top of that, we didn’t have backups for everything because they didn’t want to pay for more storage. They were losing like $100K+ every hour they were down.
The speed at which they approved all-new hardware inside a colocation facility after The Event™ was absolutely hilarious, I’d never seen anything approved that quickly.
Trust me, they’re going to keep putting it off until you have your own version of The Event™, and they’ll deny that they ever disregarded the risk of it happening in the first place, even though you have years’ worth of emails saying “If we don’t do X, Y will occur.” And when Y occurs, they’ll scream “Oh my God, Y has occurred, no one could have ever foreseen this!”
It’ll happen. Wait and watch.
Sounds like a universal experience for pretty much all fields of work.
Government and policy? Climate change? A fucking pandemic?!
We’ve seen it all happen time and time again. People in positions of authority get overconfident that if things are working right now, they’ll keep working indefinitely. And then, despite being warned for decades, when things finally break they’ll claim no one could have foreseen the consequences of their lack of responsibility. Some people will even chime in and begin theorising that surely those who warned them had to be responsible for all the chaos. It was an act of sabotage, not of foresight.
Places I’m at usually end up bricking robots and causing tens of thousands of dollars of damage to them because they insist on running the robot without allowing small fixes.
Usually a big robot crash will be The Event that teaches people to respect early warning signs…for about 3 months. Then the old attitude slides back.
Good thing we aren’t building something that requires precision, like semiconductor wafers. Oh wait.
That’d just be on them, losing tons and tons of money from lost usable wafer space lol. They’re machine-gunning themselves in the legs.
As weird as it may seem, this might be a good argument in favor of Pascal. I despised learning it at uni, as it seemed worthless, but it seems it can still handle business-critical software for 20 years.
What OP didn’t tell you is that, due to its age, it’s running on an unpatched WinXP SP2 install and patching, upgrading to SP3, or to any newer Windows OS will break the software calls that version of Pascal relies upon.
You’re literally describing the system that controlled employee keyscan badges a couple of jobs ago…
That thing was fun to try and tie into the user disable/termination script that I wrote. I ended up having to just manipulate its DB tables manually in the script instead of going through an API, because the software didn’t expose one. Figuring out their fucked-up DB schema was an adventure on its own too.
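For illustration only, the direct-table approach looks roughly like this. Everything here is hypothetical: the DSN, table, and column names are invented stand-ins, since the real schema was vendor-specific and had to be reverse engineered.

# Hypothetical sketch: disable a badge holder by updating the access-control
# DB directly, because the vendor software exposed no API. Names are invented.
import pyodbc

def disable_badge_user(username: str) -> None:
    conn = pyodbc.connect("DSN=KeyscanDB")  # assumed ODBC DSN
    try:
        cur = conn.cursor()
        cur.execute(
            "UPDATE CardHolders SET Active = 0 WHERE LoginName = ?",
            username,
        )
        conn.commit()
    finally:
        conn.close()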
I’m also describing the machine in my office that runs my $20,000 laser plotter/large format scanner. The software in the machine uses (Java?) over a web interface, which was deprecated and removed from all browsers around 2012-14, iirc. The machine isn’t supported anymore and the only way to clear an error or update where it sends scans is using that interface. I have an XP SP2 machine running the internal IE6 browser which will still display the interface. Since I’m now a one-person office, and I use the scanner about 6 times a year, I keep that machine around in case I need to turn it on to update the scanner or clear a print error. Buying a new plotter isn’t worth the time/money - when it dies I’ll just farm out the work to a 3rd party vendor; but while it does work it’s convenient to have in-house.
If it’s that old, I’m betting it doesn’t use HTTPS for its connections. You could do a network packet capture on the XP machine (or if you can find one, hook it up to a network hub with another computer attached and capture there) while performing the “clear error” action and find out how it works/what you need to send to it to clear the error. You could also set up a SPAN port on a switch and mirror the traffic on the port going to the printer to capture the traffic, if you have a switch capable of doing that. If not, you can get one off Amazon for about $100.
It’d be pretty simple to put together a script that sends the “clear error” action to the printer after seeing how it’s done in the packet capture. I’ve done this numerous times, the latest of which was for a network-connected temperature sensor that I wanted to tie into but didn’t (publicly) expose an API of any kind.
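As a rough sketch, the replay script could look like the following once the capture reveals the endpoint and form fields. The address, path, and parameter names below are placeholders, not the plotter’s real interface.

# Placeholder sketch: re-send the "clear error" request observed in a packet
# capture. Replace the URL and form fields with the captured values.
import urllib.parse
import urllib.request

PLOTTER = "http://192.168.1.50"  # assumed address of the plotter

def clear_error() -> None:
    body = urllib.parse.urlencode({"action": "clearError"}).encode()
    req = urllib.request.Request(PLOTTER + "/cgi-bin/status", data=body)
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.read()[:200])

if __name__ == "__main__":
    clear_error()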
It’s more than that, though - it’s used to set up custom sheet widths as well as enter new server and login details for sending scans via FTP to a server. If I’m doing billable work, I’m charging $225/hr. If I’m snooping the network, which isn’t my field and which I do almost never (so it takes me several times longer than an expert), I’m making nothing. With an annual value on the machine’s services at less than $500 (more than half of which would become reimbursable if I didn’t have it), there’s no actual value in “fixing” it by creating a different workaround. 🤷♂️
Survivorship Bias
Anything can if you don’t update it.