Apparently they were doing scheduled maintenance to increase storage space. In the process, various errors occurred (missing images and upload errors), so they had to extend the maintenance to fix those issues.
The web server that serves up the pages your browser shows is attached to a file server that holds all the stuff posted to FA. There is a "pointer" on the web side that points to the files on the other server and the combination creates the pages you see with the thumbnails, etc., of the files pointed at. You click on one of those and you get the file it was pointing to.
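To picture that, here's a tiny toy sketch in Python. It's purely illustrative, nothing like FA's real code, and the IDs and paths are made up; the point is just that the web side only holds a lookup, not the files themselves.

# Toy model of the setup (hypothetical, not FA's actual code or paths):
# the web side keeps a "pointer" (an ID) and asks the storage side for the file.
STORAGE = {
    101: "/mnt/fileserver/art/101.png",   # made-up paths on the file server
    102: "/mnt/fileserver/art/102.png",
}

def serve(file_id: int) -> str:
    """Look up the pointer and return the file it points at."""
    path = STORAGE.get(file_id)
    if path is None:
        raise FileNotFoundError(f"nothing at pointer {file_id}")
    return path

print(serve(101))   # -> /mnt/fileserver/art/101.png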
There was a recent increase in the amount of files that needed to get stored, so the staff took some downtime to expand the storage capacity. Unfortunately, the software doing the "pointing" to those has a limit on how many things it can point to. When the storage was expanded, the count of things to point at exceeded that top limit. So, unfortunately, the pointer just "wrapped around" its upper limit and started back in from the opposite side with negative numbers, working back up toward zero.
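If you want to see what that "wrap around" actually looks like, here's a little Python sketch. (Purely illustrative; wuff doesn't know the actual width of the field FA uses, so this assumes a signed 32-bit counter.)

def as_int32(n: int) -> int:
    """Interpret n the way a signed 32-bit field stores it (two's complement).
    The 32-bit width is an assumption for illustration only."""
    n &= 0xFFFFFFFF                     # keep only the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

limit = 2**31 - 1                       # 2,147,483,647: top of a signed 32-bit field
print(as_int32(limit))                  #  2147483647  -- still fine
print(as_int32(limit + 1))              # -2147483648  -- wrapped around to negative
print(as_int32(limit + 2))              # -2147483647  -- "working back up toward zero"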
This confused the heck out of the rest of the web server, which tried to serve up "negative" material, and things broke down. Some images showed up missing and other faults happened during uploads. It looked like files were getting lost or corrupted in the recently upgraded storage area.
Naturally, ferreting this out took a LOT of troubleshooting, because the common faults like corrupted files and software bugs were all showing up as "everything's great!" All the data was there, but the "sign post" aiming at it was pointing off into fantasy land. That sort of "pointer" fault is really rare, so they went looking for the more common things first.
When the staff figured things out, they had to revert everything back to something that was previously working within the pointer's limits. And they had to run checks to make sure none of those new or legacy files had gotten any corruption while things were wandering around off the edge of the map.
That means checking the continuity and integrity of every file. If you've ever done a "disk scan," you know that can take a while. Now imagine scanning a "disk" the size of FA.
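For the curious, an integrity scan like that boils down to reading every byte of every file and comparing a checksum against a known-good value. A rough Python sketch, assuming SHA-256 for the checksum since wuff doesn't know what FA actually uses:

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so huge files never have to fit in RAM.
    SHA-256 is an assumption here, picked just for illustration."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> dict[str, str]:
    """Walk every file under root and record its checksum; comparing these
    against a known-good list is what flags silent corruption."""
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

# Every byte of every file gets read once -- which is why doing this over
# something the size of FA's archive takes days, not minutes.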
So, the staff has everything more or less back to where they started. The checksum runs and other checks are still going, so they've locked uploads to keep from adding to the issues. They're going to upgrade the needed software to make "pointing" work with bigger storage, then expand the storage capacity again once all of that is finished. That might be a week or so from now (told ya a scan of something FA's size takes a LOOOOONNNNGGGG time!).
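Why the upgrade fixes it for good, at least in this illustration: assuming the old field was a signed 32-bit counter and the new one is 64-bit (both assumptions, not confirmed), the ceiling moves so far out that wrapping just isn't a practical concern anymore.

old_limit = 2**31 - 1    # assumed old width:             2,147,483,647 things to point at
new_limit = 2**63 - 1    # assumed new width: 9,223,372,036,854,775,807 things to point at
print(new_limit // old_limit)   # roughly 4.3 billion times more headroom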
End result is FA will have a more robust file architecture and lots more file capacity.
(And now Vrghr's going to get jumped by real admins tearing wuff's fur out over how many errors wuff made in this explanation! *grin*)