Mattermost, Inc.

[Solved] GitLab 8.8.9 -> 8.9.0 - Mattermost upgrade issue (invalid memory address or nil pointer)


Trying to upgrade from GitLab 8.8.9 to GitLab 8.9.0, which requires an update to Mattermost.

The update to Mattermost is failing on my end with an error that indicates an invalid memory address or nil pointer dereference.

Is there a way I can manually consolidate database users myself via DB updates? In my case, at least, all users are GitLab users, so I can be sure that everyone can be consolidated.
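For reference, the first step would be finding the affected accounts. A minimal sketch of that query logic, assuming the Mattermost `Users` table carries `AuthService`/`AuthData` columns (verify against your actual schema; Mattermost runs on MySQL/PostgreSQL, and sqlite3 is used here only so the example is self-contained):

```python
# Hypothetical sketch: find Mattermost accounts that lost their GitLab link.
# sqlite3 stands in for Mattermost's real MySQL/PostgreSQL database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Id TEXT, Email TEXT, AuthService TEXT, AuthData TEXT)")
conn.executemany(
    "INSERT INTO Users VALUES (?, ?, ?, ?)",
    [
        ("u1", "alice@example.com", "gitlab", "101"),  # still linked to GitLab
        ("u2", "bob@example.com", "", None),           # GitLab link stripped
    ],
)

# Accounts whose GitLab authentication reference is missing:
orphaned = conn.execute(
    "SELECT Id, Email FROM Users WHERE AuthService IS NULL OR AuthService <> 'gitlab'"
).fetchall()
print(orphaned)  # -> [('u2', 'bob@example.com')]
```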

Steps to reproduce

[root@localhost /]# su - mattermost
-sh-4.1$ bash
bash-4.1$ /opt/gitlab/embedded/bin/mattermost -config='/var/opt/gitlab/mattermost/config.json' -upgrade_db_30
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: interface conversion: interface is runtime.errorString, not string
[signal 0xb code=0x1 addr=0x0 pc=0x78090f]

goroutine 1 [running]:
panic(0xcd1aa0, 0xc820015e40)
	/usr/local/go/src/runtime/panic.go:481 +0x3e6
	/var/lib/jenkins/jobs/mattermost-platform-release/workspace/src/ +0x6f
panic(0xccbec0, 0xc82000e0e0)
	/usr/local/go/src/runtime/panic.go:443 +0x4e9
	/var/lib/jenkins/jobs/mattermost-platform-release/workspace/src/ +0x9f, 0x26)
	/var/lib/jenkins/jobs/mattermost-platform-release/workspace/src/ +0x10aa
main.doLoadConfig(0x7ffca76a7851, 0x26, 0x0, 0x0)
	/var/lib/jenkins/jobs/mattermost-platform-release/workspace/src/ +0x6a
	/var/lib/jenkins/jobs/mattermost-platform-release/workspace/src/ +0x43

I think I solved this. I am testing the upgrade on another virtual machine, so I disabled email notifications; since the mail server was part of the configuration, the upgrade was failing when attempting to connect to it.
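For anyone hitting the same crash: the relevant settings live under `EmailSettings` in `config.json`. Something along these lines should stop the upgrade tool from trying to reach an unreachable mail server (the key names are from the Mattermost configuration as I understand it; the values here are illustrative):

```json
"EmailSettings": {
    "SendEmailNotifications": false,
    "SMTPServer": "",
    "SMTPPort": ""
}
```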

After running the test, it seems many users had their data modified in a way that removed their GitLab authentication references. These users can no longer access their teams, since login via email is disabled and I only allow authentication via GitLab.

It looks like I’ll have to write my own migration script to handle this more effectively and avoid putting a burden on end users during the transition.
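As a starting point, such a script might re-attach the GitLab authentication by matching Mattermost accounts to GitLab user IDs by email. This is only a sketch under my assumptions (column names, sqlite3 standing in for the real MySQL/PostgreSQL database, and `gitlab_ids` assumed to be exported from GitLab's own users table); test it on a copy of the database first:

```python
# Hypothetical migration sketch: restore GitLab authentication references
# on Mattermost accounts by matching email addresses. sqlite3 stands in
# for Mattermost's real MySQL/PostgreSQL database.
import sqlite3

# Assumed to be exported from GitLab: email -> GitLab user id
gitlab_ids = {"bob@example.com": "202"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Id TEXT, Email TEXT, AuthService TEXT, AuthData TEXT)")
conn.execute("INSERT INTO Users VALUES ('u2', 'bob@example.com', '', NULL)")

# Re-attach the GitLab auth reference for every matched account.
for email, gitlab_id in gitlab_ids.items():
    conn.execute(
        "UPDATE Users SET AuthService = 'gitlab', AuthData = ? WHERE Email = ?",
        (gitlab_id, email),
    )
conn.commit()

row = conn.execute("SELECT AuthService, AuthData FROM Users WHERE Id = 'u2'").fetchone()
print(row)  # -> ('gitlab', '202')
```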