We don’t store big artifacts inside GitLab either.
I think you are answering my question, which is more clearly put as “when the field of battle gets messy, how do you stay organized?”. It is the responsibility of the build job in GitLab CI to create and name its artifacts, and then execute whatever command pushes the large binary artifacts out. For us, that would probably mean pushing the data to a storage system, perhaps something as simple as a big network file server inside the company.
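As a rough sketch of what that push step might look like (the mount point, the output directory, and the layout are assumptions on my part, not a description of our actual setup; the CI_* variables are GitLab’s predefined environment variables):

    # artifact_push.py -- hypothetical sketch of a CI push step.
    # Copies the build's binary artifacts to a versioned directory
    # on an internal file server mounted at ARTIFACT_ROOT (assumed).
    import os
    import shutil

    ARTIFACT_ROOT = "/mnt/build-artifacts"  # assumed mount point

    # GitLab CI sets these for every job; fall back to placeholders
    # so the sketch also runs outside CI.
    project = os.environ.get("CI_PROJECT_NAME", "example-project")
    commit = os.environ.get("CI_COMMIT_SHA", "local")[:12]

    dest = os.path.join(ARTIFACT_ROOT, project, commit)
    os.makedirs(dest, exist_ok=True)

    # Push everything the build job dropped into its output directory
    # (the "build/output" location is another assumption).
    for name in os.listdir("build/output"):
        shutil.copy2(os.path.join("build/output", name), dest)
        print(f"pushed {name} -> {dest}")

The build job would simply invoke something like this after its compile step, so the naming and pushing stay the job’s concern rather than GitLab’s.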
You chose Artifactory as your store? What attributes does it have that make it a good choice for you?
GitLab has the ability to store artifacts, but it is a poor choice for binary artifacts in our case. For certain types of artifacts, say Java JARs, .NET assemblies, or Ruby gems, there are already established patterns that people use. When it comes to C++ DLLs/shared libraries and the like, there are far fewer common practices, and it seems each company/org/project does something very different.
It seems to me that having all artifacts available on an LDAP-authenticated HTTP/WebDAV store, with a directory structure and a versioning system, will be required. An unsolved problem for me is how to mark assets that have become important, i.e. that someone or something depends on, and retain them, while letting assets that never became important expire, so that every commit of every repo doesn’t leave behind a pile of useless “noise assets”.
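To make the expiry idea concrete, here is a minimal sketch of the kind of retention sweep I have in mind. Everything here is assumed rather than implemented: the store uses the same project/commit directory layout as the push sketch above, and an asset is “marked important” simply by a .keep file dropped next to it by whoever depends on it.

    # retention_sweep.py -- hypothetical sketch of noise-asset expiry.
    # Deletes per-commit artifact directories older than MAX_AGE_DAYS
    # unless they contain a ".keep" marker (the assumed way of saying
    # "someone/something depends on this, retain it").
    import os
    import shutil
    import time

    ARTIFACT_ROOT = "/mnt/build-artifacts"  # assumed mount point
    MAX_AGE_DAYS = 30

    cutoff = time.time() - MAX_AGE_DAYS * 86400

    for project in os.listdir(ARTIFACT_ROOT):
        project_dir = os.path.join(ARTIFACT_ROOT, project)
        for commit in os.listdir(project_dir):
            asset_dir = os.path.join(project_dir, commit)
            if os.path.exists(os.path.join(asset_dir, ".keep")):
                continue  # marked important: retain indefinitely
            if os.path.getmtime(asset_dir) < cutoff:
                shutil.rmtree(asset_dir)
                print(f"expired {asset_dir}")

Run periodically (say, from a nightly job), this would keep the store down to pinned assets plus a rolling window of recent builds; the open question is what mechanism actually sets the marker.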
As a software development organization, we have moved from monolithic builds (check out 6 GB of stuff and run a giant Ant-style XML build that takes six hours), where CI is truly impossible (the minimum build interval was six hours), to a micro approach. We’re still looking for some of the tooling pieces to make this transition.