I hate to break it to you, but if someone is not that computer-savvy, they really shouldn't be getting a system with dual graphics. Those systems are known to be more maintenance-intensive (not so much in the short term, but in the long term, as you've already noticed). This could be made easier if the notebook vendor showed some cooperation, but that obviously isn't the case with HP. Ironically, even here a notebook with just an Intel IGP would be easier to maintain, and you could probably still run the latest GL code (albeit slower than on a discrete GPU) because the driver would be up to date.
Sorry to say this, but you're out of luck. We know what the problem is, but the assumption that context sharing works properly is so deeply rooted in the SFML code that it isn't feasible to code a workaround just for the people who hit this problem, even though they aren't that small a user base.
What you can do, however, in your own code, is prevent this case from happening. Now that you understand what triggers the crash, you can take steps to avoid getting into that situation. As should be obvious from the example, make sure you only ever have one context active, from the start to the end of the application. The first thing you should do is create the window and set its context active. Do not create new contexts or deactivate that one afterwards. This also means no access to anything graphics-related from secondary threads. The last thing your application should do is let the window go out of scope, after all other resources have already been destroyed. Do not manually close it; it will close itself on destruction.
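A rough skeleton of that structure might look like the following (a minimal, single-threaded sketch using the standard SFML 2.x API; the texture filename and window title are just placeholders, not from your code):

```cpp
#include <SFML/Graphics.hpp>

int main()
{
    // Step 1: create the window first. Its context is the ONLY context
    // the application will ever use, alive from start to end.
    sf::RenderWindow window(sf::VideoMode(800, 600), "Single-context app");
    window.setActive(true); // activate it once, never deactivate it

    {
        // Step 2: all graphics resources live in this inner scope, so they
        // are destroyed BEFORE the window (and its context) goes away.
        sf::Texture texture;
        // texture.loadFromFile("image.png"); // hypothetical resource
        sf::Sprite sprite(texture);

        bool running = true;
        while (running)
        {
            sf::Event event;
            while (window.pollEvent(event))
            {
                // Step 3: do not call window.close() manually; just leave
                // the loop and let the window destroy itself on scope exit.
                if (event.type == sf::Event::Closed)
                    running = false;
            }

            window.clear();
            window.draw(sprite);
            window.display();
        }
    } // resources destroyed here, while the context still exists

    return 0; // window goes out of scope last, taking the context with it
}
```

The key point is the nesting: everything that owns GPU resources sits in a scope strictly inside the window's lifetime, and no secondary thread ever touches graphics.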
If you follow those instructions, chances are good that you will end up working in a single context only. I already do this myself, because I constantly work with non-shareable resources like VAOs, and this strategy works for me. You have to be very strict with yourself, but if that is the only option you have, I don't think there is anything else you can do.