Error during upgrade to 14.2.6

I'm getting the following error after upgrading to 14.2.6:

If this container fails to start due to permission problems try to fix it by executing:

docker exec -it gitlab update-permissions

docker restart gitlab

Cleaning stale PIDs & sockets

Preparing services...

Starting services...

Configuring GitLab...

/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system

I’ve tried docker exec -it gitlab update-permissions with no success. It looks like the filesystem is read-only. Not sure how it got like that or how to fix it properly.
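A quick way to tell whether the container's whole root filesystem is read-only, or only a specific mount, is a sketch like this (assuming the container is named gitlab, as in the command above):

```shell
# If this succeeds, the root filesystem is writable and only specific
# mounts (such as /proc/sys) are read-only.
docker exec gitlab sh -c 'touch /tmp/.rw-test && rm /tmp/.rw-test && echo "rootfs writable"'

# List every mount the container sees whose options begin with "ro".
docker exec gitlab sh -c 'grep " ro," /proc/mounts'
```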


Can you make sure the host system has enough free disk space? After a quick Google search, people reported similar problems with Docker when the host filesystem had run out of space, so I just wanted to make sure your problem wasn't caused by this.

The filesystem has plenty of free space, 91 GB to be exact.

Can you inspect the container and see if it has anything marked as read-only:

docker inspect <id> | grep -i read

You can use either the container's ID or its name. This sounds more like a Docker problem than a GitLab one, to be honest, but I can't find anything concrete by searching right now.
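Something else worth checking, since the path in your error is under /proc/sys: Docker mounts /proc/sys read-only by default in unprivileged containers, so the mount options inside the container matter too. A sketch (assuming the container is named gitlab):

```shell
# Targeted inspect of the read-only settings, rather than grepping the
# whole JSON (assumes a container named "gitlab"):
#
#   docker inspect --format '{{ json .HostConfig.ReadonlyPaths }}' gitlab
#   docker exec gitlab cat /proc/mounts | ro_opts
#
# ro_opts reads /proc/mounts-style lines on stdin and prints the mount
# points whose options include "ro".
ro_opts() {
    awk '{ n = split($4, o, ","); for (i = 1; i <= n; i++) if (o[i] == "ro") print $2 }'
}
```

If /proc/sys shows up as "ro" there, the read-only filesystem in the error is that mount, not your data volume.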

Here’s the result of that command:

"OomScoreAdj": 0,
"ReadonlyRootfs": false,
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"ReadonlyPaths": [

This is not a Docker problem. I have 46 containers running on this host with no problems. The problem with the GitLab container started with the latest upgrade, to 14.2.6. In fact, I had upgraded GitLab successfully all the way from 12.9.5 up to 14.1.8 using the upgrade paths here:

As soon as I did the upgrade from 14.1.8 to 14.2.6, this problem started.

GitLab doesn't control the filesystem; the filesystem is controlled by the Docker container. But since the container isn't showing it as read-only, there is likely something wrong with the filesystem inside the Docker volume.

Therefore, it is a Docker problem. The problem started when you attempted the upgrade, which triggered changes inside the Docker container. It could also have happened during normal use of the container, when commits reached a part of the filesystem where data inconsistencies were found, causing it to be marked read-only.

The Docker volume/filesystem needs fixing, I guess. If I find anything on Google that will help you with that, I'll post it here. Or maybe someone else who had a similar issue will add some input.

I suggest also checking dmesg and /var/log/syslog or /var/log/messages on your Docker host for any disk/filesystem problems at the host level. I don't know whether your Docker install uses block devices or stores everything under /var/lib/docker/volumes, so the problem could potentially be on the Docker host if that filesystem has issues.

Just some ideas to check also, just in case it’s not totally inside the docker container itself.
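For the log check, a filter like this might help (a sketch; the patterns are common kernel messages for a failing disk, and I'm assuming ext4 and syslog-style logging):

```shell
# fs_errors: filter kernel/syslog lines (stdin) for signatures that would
# cause the kernel to remount a filesystem read-only.
fs_errors() {
    grep -iE 'remount.*read-only|ext4-fs error|i/o error|journal abort'
}

# Usage on the docker host:
#   dmesg | fs_errors
#   fs_errors < /var/log/syslog
```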

The data for GitLab is stored on a separate drive, /dev/sdb1, so here are the results of fsck for that disk:

fsck from util-linux 2.31.1
e2fsck 1.44.1 (24-Mar-2018)
/dev/sdb1: clean, 179981/10485760 files, 16040234/41942784 blocks

As you can see, no problems.

No errors in /var/log/syslog either. This is looking like a GitLab code issue or some type of permission issue.

If it were GitLab, others would also have the same problem, and nobody has posted about it. Check all your log files for errors, since right now there is nothing that allows anyone to help you resolve the problem.

How would you explain that I was able to upgrade from 12.9.5 all the way up to 14.1.8 without any problems, and as soon as I tried to upgrade to 14.2.6 the problem started?

The problem we are having here is that you are sending me on wild goose chases to check for non-existent problems with the host, and you are not considering that this may be a GitLab issue; your reasoning is that other people would have posted about it if it were. How would you also explain the fact that I have 46 other containers using the same drive on the host, and none of them are experiencing read-only filesystem issues?

I’ve also searched around for that issue and found nothing before I posted here.

I have given you tips to solve your problems. If that is a wild goose chase, OK, fine, solve the problem yourself. I don’t really care anymore. Free support for you, and all you do is complain.

But remember, disk problems can affect specific blocks/sectors. Just because all your other containers work doesn't mean the disk isn't the problem, since they may sit on blocks/sectors that aren't affected. Maybe it's not. Maybe it is. You don't provide enough information either way. Check your log files or anything else and find the problem yourself. I'm not a psychic and cannot guess your problem with the lack of information you've posted. If it was a GitLab problem it will be in the logs, so go check them as I already asked you to.

You post asking for assistance and ignore everything that people suggest you check to solve the problem.


Not agreeing with you is not complaining. Simply pointing out the flaw in your logic. You don’t have to like it but don’t accuse me of complaining.

I’ve provided you with everything you asked for. What exactly did I ignore?

You asked me to check the disk; I did and provided the output. The disk is clean, with no bad blocks/sectors as per the fsck output. I'm not sure what else you want me to provide to show that the disk is not bad.

I checked the syslog and found nothing. I can’t provide anything that I don’t have. What other logs do you need?

If you don’t know the answer, then that’s fine. I’m good with that. Just don’t accuse me of stuff that I’m not doing. Cool?

Don't accuse me of sending you on a wild goose chase when I gave up my free time to attempt to help you. If it were a problem with GitLab 14.2.x, then everyone upgrading to 14.2.x would have this problem. They don't; just you do. So this is a problem with your installation. Otherwise, there would already be posts on this forum about it, and as you found, there are none.

Something has gone wrong during your upgrade, which may or may not be related to the upgrade process itself. You cannot say with 100% certainty that GitLab is at fault just because you were running the upgrade. I could restart a service, have problems, and assume the restart was the cause, but it might fail to restart for a hundred reasons. You would be right, however, if the upgrade process failed or aborted partway through; and if that were the case, you would have had error output, and it would be in the GitLab log files. If you say no such errors exist, then the GitLab upgrade completed successfully, the side effects you are experiencing are caused by something else, and I ask questions to try to find out where the problem is. Only you have access to your system; I do not.

So your logic is flawed, not mine. You checked the disk: fine, no errors. You checked syslog: fine, no errors. But did you check the GitLab log files inside the container? Did you connect to the container using docker exec? I don't know whether you did this or not. You checked the system logs; I don't know what else you checked.

Nobody can know the answer to your problem if no information from log files or debugging is provided. Check the GitLab docs for gitlab-rake commands, maybe here: Maintenance Rake tasks | GitLab, or search the docs for other rake commands or debugging options. Through these you may find the information that you said doesn't appear in the log files.
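For example, the usual self-checks look something like this (a sketch; this assumes the Omnibus image and a container named gitlab):

```shell
# Built-in self-checks and log tailing for an Omnibus GitLab container.
docker exec -t gitlab gitlab-rake gitlab:env:info             # environment/version summary
docker exec -t gitlab gitlab-rake gitlab:check SANITIZE=true  # run the built-in self-check
docker exec -t gitlab gitlab-ctl tail                         # stream logs from all services
```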

Debugging problems requires asking questions, even if you don't agree that something is the source of the problem. I ask the questions to rule out disk errors or lack of disk space as the cause. It is not a wild goose chase. Like I said, you have access to the system; I do not.

We are not getting anywhere with this discussion. I’ll go ahead and restore and try again. Thanks for your time.

Maybe the restore will help. You can try it. But yes, we won't get anywhere in resolving your problem if you don't answer the questions asked.

If you post here asking for help, the people helping you need answers to their questions.

I didn't get an answer to this, so if you don't help me help you, I cannot solve your problem.

So if the problem occurs again after you restore, I suggest you debug your system properly, as I asked in my last post, using the GitLab rake maintenance commands from the link I provided. That would have given me information to work with. You didn't do that, so if we aren't getting anywhere, it is because the information needed to resolve the problem isn't being provided.

Good luck with your restore!