Why are artifacts uploaded (but `after_script` not run) when a job fails to start completely?

I have a job that fails while it is still in the process of starting, before the litany of `Removing ...` lines from the pre-run cleanup:

```
Getting source from Git repository
Fetching changes...
Reinitialized existing Git repository in /builds/xxxxx/.git/
error: RPC failed; curl 18 Transferred a partial file
fatal: expected flush after ref listing
Uploading artifacts for failed job
Uploading artifacts...
tests*: found 1150 matching artifact files and directories
Uploading artifacts as "archive" to coordinator... 201 Created  id=xxxx responseStatus=201 Created token=xxx
```
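For context, a minimal sketch of a job definition that could produce a log like this (the job name and script are hypothetical; only the `tests*` artifact path comes from the log above):

```yaml
test-job:
  script:
    - ./run-tests.sh            # hypothetical; never reached, the job died fetching sources
  after_script:
    - echo "post-job cleanup"   # hypothetical; did not run either
  artifacts:
    when: always                # assumed: `on_failure` or `always` would explain why
    paths:                      # "Uploading artifacts for failed job" appears at all
      - tests*
```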

Why would the artifacts be uploaded in this situation? Shouldn't the whole job abort? Since none of the job's own steps ever ran, the 1150 matching files can only be leftovers in the reused build directory (note the "Reinitialized existing Git repository" line), so isn't this allowing [potentially sensitive] information to bleed from an earlier job?

Why wouldn’t the after_script run?