Dragonlaird

quote author=perkiset link=topic=38.msg1051#msg1051 date=1178659875
As I mention in the Ajax over SSL in IE 6 thread, this covers a small shitty where the requestor refuses to send out the packet


A few insights I've discovered whilst developing my own cross-browser AJAX object. Sorry, but I'm not posting the code guys - it's getting HUGE and spread over several files, but...

The 12030 'bug' is not just restricted to SSL! It seems to occur whenever a port number is used in the URL. I can't confirm if this includes port 80 as, to be honest, I haven't checked, but I can confirm it always happens when I use a FULL URL complete with any non-standard port number, even if it is the same port as the host website.

A little pre-amble to help explain what I've discovered before offering a few tips to help get around the problem.

I found this same error when using the MS 'ASP.Net Development Server' on my local PC, which fires up to load a site for testing/debugging purposes in programs such as VWD etc.

When this Dev Server loads, it uses weird and wonderful port numbers so as not to conflict with other ports already in use on the local PC (e.g. Port 80 is often assigned to Personal Web Server).

In my AJAX routines, I have a function which always converts the relative URL to the full, absolute URL. Simply put, this is a BIG mistake.

Use RELATIVE URLs people!

If your site is using Port 80 (or any other port for that matter) - and the URL you're posting to is in the same site, on the same port - don't try and get clever by converting the URL of the AJAX component to the full URL (complete with 'http://MySite/...').

If your site *must* use a port other than port 80, load the whole site using that port before you start using AJAX, then continue to use relative addressing for all your AJAX URLs.

Don't try and load an SSL page (or any port number - even the SAME port as the website location) with an absolute URL via AJAX - it will die horribly in IE6+ with the dreaded 12030 error.

So, as a quick example of good/bad URLs to use:

If your base website address is:
http://localhost:1765
And you want to access, via AJAX, the page:
http://localhost:1765/Content/Default.htm
Do NOT use the above URL in your AJAX component; convert it to a relative address like:
/Content/Default.htm
Or:
Content/Default.htm
(The latter if you are calling the page from the ROOT of the site).
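
As a quick illustration in code (a minimal sketch of the relative-URL approach only - the requestor setup below is generic, not Dragonlaird's actual object):

// Sketch: request the page with a relative address so the browser resolves
// host, port and protocol from the current page - nothing in the URL itself
// can trigger the 12030 behaviour described above.
var req = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
req.onreadystatechange = function() {
	if (req.readyState == 4) { alert(req.responseText); }
}
req.open('GET', '/Content/Default.htm', true);
req.send(null);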

This causes a slight problem for everyone whose website intends to pass info over SSL when their base website uses Port 80. However, you *might* find it will work if you *always* load the base URL using 'https' instead of 'http'.

Not a perfect answer I admit, but hey - we're here to help each other so try it guys and let others know what you find!

perkiset

Great post Dragon... a couple adds -

I found inconsistency and even flat-out failure with FF and Safari as well as IE when using fully qualified URLs - I never use them. I don't use relative URLs either; I still use hard locations (i.e., /adir/adir/afile.php as opposed to adir/adir/afile.php - note the missing first slash, which defines it as relative). The port number is also a problem, because by AJAX definition you are not allowed to change port, URL, subdomain, or even protocol... the call must go directly to the <i>exact same place it came from</i> or it is out of standard. Interestingly, it seems that in IE6 you can sometimes get away with it - this is actually a bug.

So what's a poor ajaxer to do?

I'm working on an iFrame remote scripting call/receive class that mimics AJAX, but is much more stable due to the more widely accepted standards of iFrames and standard HTML. Additionally, this has the benefit of being able to grab from any other port or protocol... or even from another domain, which is critical in a large application I am working on. Personally, I am moving back towards the notion of simple "Remote Scripting" rather than really ajax, since I don't always use XML as my communication syntax/structure, and if I blend in iFrame remote procedure calls instead of using the XMLHttpRequest then, of course, I am heretical.

I'll be posting it in the next many days when I get it done - I think it will spin your gears a bit.

/p

perkiset

<slightHijack>
Wait now...

Am I being incredibly stupid? Isn't the notion of a mashup a combination of things from different websites? If I call for javascript from another domain, can't functions in that javascript call home to that domain? Isn't this exactly how the Google Maps API works? Suddenly feeling like a big cool train has been driving by me and I haven't been hearing it...

</slightHijack>

nutballs

YES. The "remote" domain has to have functions that do what you want, of course... simple example: I can have a cool crazy flyout menu on my site without hosting the JS... I know what you're thinking...

perkiset

Oh yeah... stayed up late on the notebook and demo'd it. How'd I fishing miss that?!?!?! Given the SSL/12030 success I've had, I'm thinking hard this morning about where I'm going with ... that... let you know prolly later today or tomorrow.

/p

artur.chyzy

We would like to contribute some more info with regard to the 12030, 12151 and 12031 errors in IE6. We have got these problems in our application and went almost mad due to them.



The solution which we found is to force HTTP protocol version to be downgraded to 1.0 from default 1.1.

It seems that our beloved, commonly known and highly qualified vendor of Internet Explorer has implemented HTTP 1.1 very badly.

In our case this workaround has helped and we don't get the errors any more.



This can be done in the Apache configuration:

SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
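
For reference, the stock Apache SSL config ships with an equivalent stanza keyed on BrowserMatch rather than SetEnvIf (shown here only as a sketch - adjust the User-Agent pattern to taste):

BrowserMatch ".*MSIE.*" \
    nokeepalive ssl-unclean-shutdown \
    downgrade-1.0 force-response-1.0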



Let us know if it worked for you.



Artur

nutballs

what would be the downside to doing that? For the most part, that seems like a reasonable, albeit stupid to have to do it, solution.

perkiset

Welcome Artur -

That's a really novel solve and I can't see any downside really... it would be very easy to set the environment to 1.0, then rewrite the URL into a handler if it is an AJAX call, then set the environment back to 1.1 right after the rewrite if it even gave you trouble... although I am not sure what trouble it would give.

Artur, did you notice any other symptoms or side-effects of downgrading the HTTP version?

BTW - great work and really creative. Never would have even thought of that or tried it. Well done.

/p

artur.chyzy

Well, thanks.
But this was not all my own work.
I was using Wireshark to track the TCP packets.
There were some problems with the SSL connection at the packet level (wrong checksum).
My friend found on some site that there are some problems with HTTP 1.1 and IE.
We decided to try to downgrade it to 1.0.
What was my surprise when this actually worked (so remember not to fight with IE alone Applause )

Also, the problem does not appear on IE7... probably MS reimplemented SSL.

artur.chyzy

Oh, and also the downside.
I'm not an expert, but this would be the only downside between HTTP 1.1 and 1.0:
No keep-alive.
Each image, JS file or anything else needs to be downloaded by the browser in a separate connection.
So this would be a little bit slower, but not by much (I didn't do any performance tests).
But also, pages which show config for Apache and SSL say to downgrade MSIE to 1.0 because of a bug in IE.
Even if this part would work correctly, other things would not.
So for me, downgrading IE over SSL to HTTP 1.0 is the default.

kidplug

I just came across this thread while looking for info on the SSL errors (12152 etc) in IE6 using ajax calls.

I built a retry into my ajax routine in order to deal with these, and everything works fine on the client side.

However, I have recently noticed that I am getting a lot of server-side errors which I believe are related.

It looks like when the client gets one of these errors and then immediately retries, my server actually does get both requests.
The first request is the "bad" one, and the second one is handled successfully.

Five minutes later tomcat gets a "Short Read" exception in the thread handling the first request.

So, it would seem that the original failed request has been tying up a resource on my server for five minutes.

My question is - would calling abort() on the xhr before doing the retry sever the connection to the server?
I am going to try this, but I wondered if anyone else has observed the same thing.

Thanks.
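
For what it's worth, the abort-then-retry idea in sketch form (the names here are hypothetical, not kidplug's actual routine):

// Sketch: kill the failed request, then retry on a fresh xhr after a short delay
function retryRequest(xhr, url, params) {
	try { xhr.abort(); } catch (e) { /* connection already dead - ignore */ }
	setTimeout(function() {
		var retry = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
		retry.open('POST', url, true);
		retry.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
		retry.send(params);
	}, 50);
}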

perkiset

I don't think it would hurt at all, and certainly is a good protocol.

Interesting that Tomcat saw the request - in PHP I don't get one. I'm wondering if Apache sends things more immediately to Tomcat (like the moment a header or partial packet comes in) rather than bundling the entire mess up and shipping it off to the PHP instance as one complete packet... in other words, if IE doesn't send it all, then Apache hears it but not PHP. ( QQ: Are you going straight into Java or using JBoss or something on the back end under Tomcat? And I'm assuming that programmatically you never see it, you just see the logged error? )

Then it would be holding up handles on the server, you're correct.

So an abort() would be a good option because presumably Apache would hear *that* - but since the communication layer seems to have been disrupted, I'm wondering if <i>anything</i> further would get through to Apache, even the packet-layer notification of an abort...

I've posted my personal code in the Code Repository, under "Ajax Requestor Class" - the solve I got for this was to use setTimeout to fire the request - so I delay about 10ms before firing it off, which completely eliminated my troubles. It might be worth a look for you.

Thanks, and welcome to The Cache BTW!
/p

StephenBauer

Doesn't using HTTP 1.0 mean there are no Host headers? If I recall, it does not send them.

May need to account for the lack of a host header if you are doing any virtual domain hosting or re-writing based on host/domain name.

SB

StephenBauer

quote author=perkiset link=topic=249.msg1628#msg1628 date=1179726232

<slightHijack>
Wait now...

Am I being incredibly stupid? Isn't the notion of a mashup a combination of things from different websites? If I call for javascript from another domain, can't functions in that javascript call home to that domain? Isn't this exactly how the Google Maps API works? Suddenly feeling like a big cool train has been driving by me and I haven't been hearing it...

</slightHijack>


Everything is sub-domained at Google for the most part.  Applause

SB

kidplug

Thanks for the reply, perk.

I am running Apache in front of Tomcat - and Apache actually doesn't log anything until five minutes later, presumably when Tomcat has errored out and given its response back to Apache. That is probably a function of the logging level - Apache is probably capable of logging something immediately.

My servlet in Tomcat is getting called - the doPost() method, that is, since these all seem to occur for me on POST.
So my servlet code is being executed. The exception occurs when actually trying to read the incoming request object. It sits there for five minutes and then throws the exception.

Interestingly, I have found that these errors are occurring (not as often) on non-ajax requests as well - meaning on other "regular" POST requests from the browser. IE itself is probably handling the retry in those cases. I guess this IE6 SSL / keep-alive bug is at the heart of this problem.

My retry call is also done (like yours) via a setTimeout() and it does work fine on the client.

I'm trying to reproduce the problem right now so I can test if the abort() helps, and of course the problem won't happen for me!  Applause




perkiset

LOL that's been the largest issue with this problem from the start it seems... completely unreliable. I have yet to be able to *reliably* reproduce it at will... I have to run dozens or even hundreds of tests before I get something good enough to call a pattern.

That's very interesting about the POST and it happening on other requests as well... I'll look forward to hearing your results.

/p

perkiset

quote author=StephenBauer link=topic=249.msg2281#msg2281 date=1182360686

Doesn't using HTTP 1.0 mean there are no Host headers? If I recall, it does not send them.

May need to account for the lack of a host header if you are doing any virtual domain hosting or re-writing based on host/domain name.


Right you are SB - that was one of the biggest advantages of 1.1 and arguably the reason for its quick and ubiquitous implementation... that'd be a complete showstopper for me.

perkiset

quote author=StephenBauer link=topic=249.msg2282#msg2282 date=1182360797

Everything is sub-domained at Google for the most part.  Applause


Regarding cross-domain Ajax queries: SB I'm going to be starting a new thread today or tomorrow specifically on this issue. Have done a WHOLE bunch of research lately and will be sharing - it's at once maddening, saddening and finally gladdening (jeez was that hokey)...

Stay tuned!
/p

kidplug

Well my theory was partially shot down.

I just produced a 12152 error in the browser on an ajax post, and NOTHING was logged on the server...
So apparently that error is not the cause of my server-side exceptions.

However, I also just produced a timeout in the browser on the same ajax post - the timeout is set to 7 seconds in my ajax code.
My ajax code automatically retried, and when it did, I got TWO requests logged on my server. The first one appears to be the "bad" one which caused a timeout in my client code, and will probably produce the "short read" exception in Tomcat in a few minutes.

The second request was handled fine and returned a response to the browser.

Oh - There's the exception - exactly five minutes have passed now.

The thing is - my ajax retry after timeout DOES call abort() before retrying.
It seems that in this case the abort() actually resulted in my server getting the "bad" request.
Again, my server didn't log anything for the first request that timed out, until abort() was called in the client.

In summary, it looks like my server-side errors are the result of client-side ajax "timeouts" which are retried.
I am already calling abort() on the failed (timed-out) xhr requests, so I don't know what else to do to prevent the server-side issue.

Hopefully IE7 fixes this, as is claimed...

perkiset

On a different tack -

As I hinted to SB above, I'm going to start a new thread on XHR alternatives either later today or tomorrow, based on a boatload of recent research. I am enormously frustrated with XHR in IE, and also have a need for cross-domain stuff, so I have pretty much wrapped up a new solution for myself.

I don't know if it will assist with what you've got going, but it might turn some gears...

/p

kidplug

This may be getting off topic, since I'm talking about the timeout issue, not the 12030 issue, but I've found an interesting pattern.

The timeout is occurring on a subsequent ajax post in the time window of 16 seconds to 60 seconds.

Meaning, if the last request was within the previous 15 seconds, there is no problem.
Or if the last request was over 60 seconds ago, there is no problem.

But in the 16-60 second window, the subsequent request is timing out for me EVERY time.
Funny I hadn't noticed that before.

I haven't tried it yet with a regular POST from a form.
Again, this is IE6 over SSL and seems to be related to the keep-alive issue.

---
And thanks - I will check out your alternative solution(s).

perkiset

Applause Applause Applause

Are you saying that if you throw another request at exactly 15 seconds it will fly, but at 16-60 seconds it will bomb RELIABLY? That is astonishing if I'm reading you correctly... if that's the case, then the next thing to test is, is that time frame specific to <your machine> or is that systemic... ?

Also - I just reread your post and noticed something - as you note, I am also doing a setTimeout'd retry... but the solution that I found was to separate the open() from the send() by 10ms IF the browser is IE... essentially, let any resources get up-to-date (because I've allowed the message loop to spin, I presume) before the send is dispatched. Here is that code fragment as it's laid out in my requestor:


var loader = this;
this.requestor.onreadystatechange = function() { loader.__onRTS.call(loader); }
if (this.masterStatus) { this.masterStatus.handleChange(true); }

// Set a callback to <me> in case the request takes too long...
this.timeoutHandle = setTimeout( function() { loader.__handleTimeout.call(loader); }, this.timeoutMS);

this.requestor.open('POST', theURL, true);
this.requestor.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
if ((document.all) && (document.getElementById))
{
	// IE: delay the send by 10ms so the message loop can spin before dispatch
	setTimeout( function() { loader.__executeSend.call(loader)} , 10);
} else {
	this.requestor.send(this.__postParams());
}
}

ajaxRequestor.prototype.__executeSend = function() { this.requestor.send(this.__postParams()); }


I think that this might help you out - it is the only thing that I found that reliably stopped the issue.
<i>Important note: The last version of the posted code in the repository did not contain this patch - the second to the last did... the last post has been updated.</i>

/p

kidplug

Hmm - I will try the timeout between open() and send().  I am not doing that now.

Yes you are reading correctly - from my browser - IE6 on XP, the 16 - 60 seconds is very repeatable.
When I throw in a GET request in there it changes things a bit, and if a 12030 or 12152 occurs it also changes a bit.
After one of those errors, subsequent requests > 16 seconds throw an additional 12030 - not sure how consistent this is...

But after a successful POST, a second POST in that timeframe is timing out on me every time.

I did notice something that may be useful.
During these timeout periods, the readyState is 1.
On other requests which are taking some time, due to a large amount of data being returned or an intensive operation on the server, the readyState is 3 during my timeout counter.

So maybe if I detect a readyState of 1 after even 1 second, I can just abort and retry.
Better than waiting the full 7 seconds, or whatever timeout I have specified.
In fact, if I am aborting these readyState==1 requests after 1 second, I can set a higher allowable timeout for requests that are actually succeeding but just taking a while.

I'm sure I will still get these phantom requests on my server if I abort after 1 second.
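
That readyState check might look something like this (a sketch only - the 1-second probe and 7-second hard timeout are kidplug's figures; everything else, including the URL and parameter names, is hypothetical):

var xhr = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
xhr.open('POST', theURL, true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send(postParams);

// Probe after one second: readyState stuck at 1 means the request never really got going
// (dead keep-alive connection), so abort now instead of waiting the full 7 seconds.
setTimeout(function() {
	if (xhr.readyState == 1) {
		xhr.abort();
		// ...fire the existing retry logic here
	}
	// readyState 3 means data is flowing - let the normal timeout handle slow responses
}, 1000);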



kidplug

I added a setTimeout between open() and send() but that doesn't seem to change the behavior.

Are you only doing that in your retry? Or also on the initial send() ?

My retry is pretty much 100% successful as it is.


I did add the abort() after one second on readyState==1, which I think will improve the user experience.
It still sends the double request back to the server though.

perkiset

quote author=kidplug link=topic=249.msg2292#msg2292 date=1182368146

I added a setTimeout between open() and send() but that doesn't seem to change the behavior.

Are you only doing that in your retry? Or also on the initial send() ?


My initial send - if the browser is IE, then I DON'T send in the same function as the open() - I set up a timeout to execute the send 10ms later

/p

kidplug

This is definitely pertinent: my apache config file specifies KeepAliveTimeout 15.

That must be where the 16 seconds is coming from.

perkiset

Applause yep, that'd definitely be the case... thanks for that update - I'da pulled my hair out looking for that one

kidplug

So the errors and timeouts are clearly caused by the xhr trying to use an https connection which the server is no longer maintaining.

Good news - it's not happening on IE7 on the machines that I've tested.

I still hate having those server-side errors after the failed/timed out xhr requests.

Have you or others observed this timeout issue at all?
I wonder what the Keep Alive timeout is on your web server(s).

Thanks.

perkiset

OK:

My older framework (Apache proxies through to Object Pascal renderers in a reverse-proxy/firewall setup): both the front-end and back-end boxes have the timeout on and set to 15 - this would be the default way that they were installed, I assume. Neither of them has any AJAX running through them - I only tell you this because it seems to be a default way that Apache installs.

My newer frameworks go through an instance of IPCop and then on to an instance of Apache 2 that does NOT have a KeepAlive directive in it... so I assume that it will use default values - I don't know what those are - have to go research that, although I'm thinking that with no directive, KeepAlive would be OFF by default... interesting...

/p

StephenBauer

quote author=perkiset link=topic=249.msg2286#msg2286 date=1182362029

quote author=StephenBauer link=topic=249.msg2282#msg2282 date=1182360797

Everything is sub-domained at Google for the most part.  Applause


Regarding cross-domain Ajax queries: SB I'm going to be starting a new thread today or tomorrow specifically on this issue. Have done a WHOLE bunch of research lately and will be sharing - it's at once maddening, saddening and finally gladdening (jeez was that hokey)...

Stay tuned!
/p


Looking forward to it...

SB

perkiset

Here we go:

http://www.perkiset.org/forum/ajax/it%92s_time_to_dump_xmlhttprequest-t336.0.html

/p

kidplug

I had an idea of how to "beat" the SSL timeout errors.
Before a POST, if I know x seconds has passed since the last POST, do a quick GET request, just as an "are you there", then do the POST.
I haven't built that yet, but I think it would help me avoid the dead connection / timeouts.
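
In sketch form, that "are you there" warm-up might look like this (hypothetical names throughout; the 15-second figure is the KeepAliveTimeout discussed above, and a more careful version would wait for the ping to return before posting):

var lastPostTime = 0;
function warmupThenPost(url, params) {
	var now = new Date().getTime();
	if (now - lastPostTime > 15000) {
		// Keep-alive window has probably expired - open a fresh connection with a throwaway GET
		var ping = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject('Microsoft.XMLHTTP');
		ping.open('GET', '/ping.htm?' + now, true);   // hypothetical lightweight URL; timestamp defeats caching
		ping.send(null);
	}
	doPost(url, params);                              // the existing POST routine (hypothetical name)
	lastPostTime = now;
}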

And, the discussion on the other thread about converting POST requests to GET got me thinking....

I could actually send some of my current POST requests as GET requests.
You said under 2k will be OK on a GET.

My ajax code was sending any query string over 256 bytes as POST.
I could safely bump that up to 1k-1.5k and probably eliminate 95% of my errors, since I'm not often posting query strings bigger than 1k.

Is the 2k limit pretty safe?  Does it apply to the entire URL or just to the query portion?

Thanks.
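
A sketch of that threshold logic (hypothetical names; the cap is kept deliberately conservative pending the answers below):

// Sketch: choose GET when the full URL stays comfortably under IE's limit, otherwise POST
var URL_CAP = 2000;   // conservative; see the IE figure quoted in the next post
function chooseMethod(baseURL, queryString) {
	var fullURL = baseURL + '?' + queryString;
	return (fullURL.length <= URL_CAP) ? 'GET' : 'POST';
}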

kidplug

MS article says:

SUMMARY
Microsoft Internet Explorer has a maximum uniform resource locator (URL) length of 2,083 characters. Internet Explorer also has a maximum path length of 2,048 characters. This limit applies to both POST request and GET request URLs.

http://support.microsoft.com/kb/208427

nutballs

Yea, it's the entire URL. And I am pretty sure that 2k is actually W3C spec, but don't hold me to that.

perkiset

Ran a couple tests per the XRPC testing and I found that it worked great up to 2000 bytes, IE, Safari and FF.

/p

Dragonlaird

OK, it seems my original post found only ONE possible cause of the problem generating these damned errors, so in a bid to take up the gauntlet, I wrote a small AJAX handler from scratch (again) and discovered one of the main problems causing it was using the POST method. Yeah, OK... so that's old news...

On the server, my 'test' page simply displayed all form and query data and this would be sent back to the requestor... Again, nothing fancy there either...

Then I was struck by a number 10 bus... Or at least I should have been... I'd missed something really obvious...

Of the requests that actually made it through and returned the page results... None of them contained any form data...

Checked my code and it was definitely posting form data... So why wasn't the page displaying it?

I had a play around with the request headers and the format of the posted data until eventually, I found a combination that worked...

To save you reading any more and to put you out of suspense... Take a look here...

http://www.perkiset.org/forum/ajax/xmlhttprequest_ie6_ssl_and_12030_error_what_is_the_solution-t442.0.html;msg37

perkiset

Here's an interesting thing:

I received a response notification from a thread I posted in re. 12030 a LONG time ago - the inference by the poster was that the 12030 error might be a problem with the fact that the PREVIOUS request is not closed... and that the problem is not the current request, but a problem with an open connection.

His (I assume it was a he) answer was to add Connection: Close to the response header coming from the server - he was not so verbose nor forthcoming, but this is an interesting possibility.

I'm gonna try it on GP... shittah it'd be funny (in a perverse sort of way) if we were all looking at the wrong end of the tiger for teeth...

pruzze

Please look here for a good explanation of the problem: danweber dot blogspot dot com/2007/04/ie6-and-error-code-12030.html

/Fredrik Prüzelius
www dot dse dot se

No urls like that please

perkiset

Pruzze - you might try reading some of the threads here before claiming that your teensy little blog entry (that really doesn't fix the problem at all) claims to solve it.

Lotsa smart folks here man...
/p

pruzze

quote author=perkiset link=topic=249.msg4387#msg4387 date=1196984601

Pruzze - you might try reading some of the threads here before claiming that your teensy little blog entry (that really doesn't fix the problem at all) claims to solve it.


Thanks for the welcome... The "teensy" blog I was referring to ain't mine. I just thought that it explained the problem well. I just tried to help! I'm very sorry if that disturbed "all smart folks"...

/Fredrik

perkiset

Sorry man, was a bad day and I was grumpy.

