I use GitHub to keep track of my code, and have previously used "Import Script from GitHub" to load my scripts onto this site, with a webhook to keep them synced. This has worked fine in the past, but recently when I have pushed scripts to my master branch on GitHub, the .meta.js file has been pushed to this site rather than the .user.js file. I've had to manually cut and paste everything into the Source Code edit box to get them working again, which rather defeats the point of syncing with GitHub...

Any idea what has changed to make this happen, or what I might be doing wrong?

I have multiple userscripts in separate folders in the same repo. Each script has a meta file as well as the main code, and some have other .js modules as well. It doesn't seem to be random scripts getting synchronised; rather, the site seems to be either getting the metadata rather than the userscript file, or getting the correct file but truncating it after the header block. I can't tell which.

Hmmm... the webhook says it tried to deliver the .meta.js from GH. Interesting little glitch. I'll try a patch in a bit.

However, before I go twiddle with that... the GH cache timing is an issue, i.e. nothing that we can do about it. Slowing down rapid commits to GH is required by their caching scheme; otherwise they serve old source regardless of what is showing in the HTML view.

Give it a whirl now with the patch. It seems to work with your custom style of .meta.js being generated offsite. Let me know if you have any other issues... although this issue should have been posted on Development for a speedier response time. Good thing I checked here this evening, otherwise you might have had a long wait. :)

Hi Marti,

I was fighting with the correctly formatted license header the other day. Now I found a way to pacify those warnings, but instead the import from GH just responds with (translated into English):

Page Load Error
Error: Secure Connection Failed
The connection to the server was reset while the page was loading.
- The page cannot be displayed, as its authenticity could not be validated.
- Please contact the owners of the website to inform them of the problem.

Further information: https://support.mozilla.org/kb/what-does-your-connection-is-not-secure-mean
[ ] Report errors to Mozilla to help identify and block fraudulent websites
[Try Again]

I tried Chromium and rekonq, but did not succeed with either.

My repository is at:
https://github.com/hklene/FaceCards
If you could please have a look?

Uploading the file directly from my hard drive, or pasting the script contents into the editor here, did not work for me either.

I even tried to publish my headers with a single hello-world line, but could not get anything through.

Re: @hklene:

Now I found a way to pacify those warnings

Probably in relation to Licensing enforcement (comment).

If you could please have a look?

That server is returning something bizarre for the URL during the @icon check.

See #1323.

I have temporarily suspended the automatic @icon check... this does still mean that if we see abuse with huge @icon values above and beyond the 256x256 resolution, the default TOS action will apply, i.e. one risks script and account termination. I put the rejection in so it didn't have to come to that.

Thanks for the bug report... next time, if you notice you are tripping the server, please post on Development so it can be addressed more quickly. Your current @icon fails to load client-side as well.

The icon requires authentication with my company's intranet, it seems. I replaced it with a data URL and could now successfully load the script.
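In case it helps anyone else, here is a minimal sketch of how such a data URL can be built in Node (the bytes below are just the PNG magic-number placeholder, not my actual icon):

```javascript
// Sketch: turn raw icon bytes into a data: URL usable as an @icon value.
// The bytes below are only the PNG magic number, standing in for a real icon.
const iconBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const dataUrl = 'data:image/png;base64,' + iconBytes.toString('base64');

console.log(dataUrl); // data:image/png;base64,iVBORw==
// In the metadata block it would then read:
// // @icon data:image/png;base64,iVBORw...
```

This keeps the icon entirely inside the script, so nothing is fetched from the intranet at all.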

Thanks for the immediate solution ... you're really quick!

BTW: You can re-enable the check, now that I've learned what caused the problem. Maybe you could add an error message pointing the user to the broken @icon URL?

Re: @hklene:

The icon requires authentication with my company's intranet, it seems.

A-ha... this is what @sizzle is talking about over on Development. :)

You can re-enable the check, now that I've learned what caused the problem.

Can't do that until the EPROTO status response gets trapped fully... it doesn't seem to want to work with the on('error', ...) routine. I made one mistake, but correcting it just changes it to a similar error from the request package (module).
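For the curious, the kind of trap I'm after looks roughly like this; a sketch only, assuming the request object is EventEmitter-like, and `makeRequest` here is a hypothetical factory, not our actual code:

```javascript
// Sketch: trap low-level socket errors (EPROTO, ENOTFOUND, ...) on a
// request object so that exactly one callback fires, whichever event
// arrives first. `makeRequest` is a hypothetical factory returning
// anything EventEmitter-like.
function fetchResource(makeRequest, cb) {
  const req = makeRequest();
  let settled = false;
  const settle = (err, res) => {
    if (settled) return;
    settled = true;
    cb(err, res);
  };
  req.on('error', (err) => settle(err, null));    // EPROTO should land here
  req.on('response', (res) => settle(null, res));
  return req;
}
```

The tricky part is that some handshake failures surface on the underlying socket rather than the request object itself, which may be why the on('error', ...) routine alone isn't catching it.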

I will tinker some more to see if the rejection can be restored, instead of falling back on the Admin+ duty of making a script and/or account eligible for termination/removal. I made this rejection so I wouldn't have to kick someone off if they didn't respond to a created issue. I'm usually very proactive when it comes to these things, and I prefer not to flex that will unless I have to. :)

Re: @hklene:

You can re-enable the check

Found the culprit, so it is re-enabled for now. Thanks again for the report. If you do decide to use that URL again, it will come up with (in a text-represented version):

EPROTO
write EPROTO 139646336411456:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:1500:SSL alert number 40 139646336411456:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:659:

... this means there was a protocol failure retrieving the resource.

I've returned a real 400 status code because usually this is a user error, but you will only see it in something like Live HTTP Headers instead of on the page itself.

I also discovered that an invalid domain will throw a similar error. Everyone will see something like:

ENOTFOUND
getaddrinfo ENOTFOUND s33333333333333333333333.amazonaws.com s33333333333333333333333.amazonaws.com:80

... this means the resource wasn't found at that domain (the domain is shown twice for some reason, but I don't handle Node development for error messages).
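In sketch form, the handling of these two cases amounts to something like the following (illustrative only, not the exact server code):

```javascript
// Illustrative sketch: map low-level fetch error codes to an HTTP status
// for the client. Not the exact server code.
function statusForFetchError(err) {
  switch (err && err.code) {
    case 'EPROTO':     // protocol failure retrieving the resource
    case 'ENOTFOUND':  // the domain did not resolve
      return 400;      // usually a user error in the supplied URL
    default:
      return 500;      // anything unexpected is on the server
  }
}

console.log(statusForFetchError({ code: 'EPROTO' })); // 400
```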

I realize that these strings and messages aren't totally helpful in plain, human terms, but I don't think we're going to do a dictionary lookup of all the available codes that Node and other dependencies put out and cross-reference them to valid HTTP status codes.