perkiset

Bombastic you say? Heresy? Nonsense. XMLHTTPRequest has been a nice try for browsers but really just doesn't make it all the way yet. Here I'll post my issues, followed by a different technique you can put to use today.

The reason for this thread is that I have an application I’m wrapping up that requires cross domain RPC. At first, looking at the discussions about it, I thought that XHR would fit the bill if done correctly. That is not true. I believed the propaganda that XHR is the best way to do remote scripting – that is also not necessarily true. So I researched my brains out to find how others are doing it and worked through a class to do my own RPC.

XHR Negatives:
* XHR is implemented differently in different browsers. XHR is implemented differently in different versions of IE. Each version has its own idiosyncrasies and issues. XHR cannot be used in older browsers.
* XHR is security locked via the Same Origin policy, i.e. you can only make requests to the domain where the PAGE came from. I would not have this argument if you could make requests to where the script came from, but that’s not what’s happening. There are a lot of valid and in fact forward-looking reasons to allow cross domain RPCs – the most immediate thought that comes to mind is a mashup. XHR cannot do mashups without some serious assistance.
* XHR has problems. Natch, the 12030 problem and that entire thread. IE and XHR and SSL do not get along at all. XHR is not completely reliable as a communication mechanism, which makes programming around it more troublesome.
* XHR requires a significant amount of code. Some would argue that it does not take much to get an XHR request off. That is true. But to make it robust and stable for all possible browsers does take a lot of code – and quite a bit of experience with it to make sure you know what’s going right or wrong. An alternative is to use someone else’s library - but do you want to put a mission-critical system up where you don’t know how it all works and can walk through debugging it?

XHR Positives:
* XHR does provide a strong remote scripting capability. It allows you to send packets of essentially any shape up to the server of origin, and receive packets of just about any shape for client-side interpretation. XHR is not the only way of doing this, but it does provide a reasonable framework for doing so.
* XHR can POST where most other mechanisms can only GET. This is important if you have really big amounts of data you want to get up to the server. If you have tiny (less than 2K) in total data per request that you need to get to the server, GET will do fine. If you need more, POST, or else a packetizing mechanism with GET, is required.
* XHR is bound to the Same Origin policy for security. Yes, I posted this in the negatives above as well. This can be seen as a huge benefit to the security minded, whereas the technique I’ll demonstrate below could potentially be compromised or misused in a way that makes it extraordinarily dangerous. XHR is very safe.
* XHR will allow more connections to the server by default than other mechanisms, which may rely on the threads that a browser will give you.
* XHR is unconcerned with the mechanism used at the server, or the return packet. No cooperation or cooperative code is required between the client and server.
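As an aside, the packetizing-with-GET idea mentioned above is straightforward to sketch. Nothing below comes from XRPC itself – the function and parameter names are my own invention, and a real implementation would also need server-side reassembly:

```javascript
// Sketch: split a large payload into GET-sized chunks. Each chunk is
// tagged with a sequence number and a total so the server can reassemble.
function packetize(data, chunkSize) {
  var packets = [];
  for (var i = 0; i < data.length; i += chunkSize) {
    packets.push(data.substring(i, i + chunkSize));
  }
  return packets;
}

// Build one URL per chunk; each goes out as its own GET request.
function buildPacketURLs(baseURL, data, chunkSize) {
  var packets = packetize(data, chunkSize);
  var urls = [];
  for (var i = 0; i < packets.length; i++) {
    urls.push(baseURL + '?seq=' + i + '&total=' + packets.length +
              '&chunk=' + encodeURIComponent(packets[i]));
  }
  return urls;
}
```

Each URL would then be issued as its own GET (via XHR or a script tag), keeping every individual request comfortably under the ~2K limit.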

My class: XRPC, or Cross-domain RPC.

Benefits of XRPC:
* It is absolutely unbound to any domain or security policy. It violates every security policy of every browser out there effortlessly.
* It is very fast. It uses javascript in the most native way possible, as well as transmission being bound to the <script> tag, so it is high priority.
* The techniques that make this possible have been around since the beginning of JavaScript time, i.e. if a browser can run any version of JavaScript, it can execute this form of RPC. It is even more broadly accepted than iFrames.
* It’s small and tight. There are very few moving parts. There are no multiple stages of connection, response codes and such – the transmission comes back or it doesn’t. Period.
* It is unlikely that the techniques used here will get modified in any significant way by the JS Language or the WWW council at any time in the foreseeable future.

Potential downsides of XRPC:
* It is absolutely unbound to any domain or security policy. It violates every security policy of every browser out there effortlessly. Some might see this as a possible problem.
* Debugging is, at first, a little more interesting. Messages about problems come back at you differently and it’s weird.
* Concurrency is weird. The browser will confine you to as many threads as it sees fit to go get your packets. This doesn’t stop your code, nor does it break it, but it can slow down responses a bit. If you need a hugely concurrent application this might not be the right fit for you.
* It takes more client-server cooperation to work than XHR – it is potentially less autonomous than XHR.
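On the concurrency point above: since the browser caps how many script fetches run at once, a simple queue can at least keep your requests orderly. This is a sketch with names of my own choosing, not part of the XRPC class – done() would be called from your response handler to release a slot:

```javascript
// Sketch: limit how many script-tag requests are "in flight" at once.
// sendFn is whatever actually appends the script tag; done() must be
// called when a response arrives, to release a slot for waiting requests.
function RequestQueue(maxConcurrent) {
  this.max = maxConcurrent;
  this.active = 0;      // requests currently in flight
  this.waiting = [];    // requests deferred until a slot frees up
}
RequestQueue.prototype.enqueue = function(sendFn) {
  if (this.active < this.max) {
    this.active++;
    sendFn();
  } else {
    this.waiting.push(sendFn);
  }
};
RequestQueue.prototype.done = function() {
  this.active--;
  if (this.waiting.length > 0) {
    this.active++;
    var next = this.waiting.shift();
    next();
  }
};
```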

After researching a boatload of different techniques as broad as ActionScript, using images and cookies, extending XHR, various proxy solutions and Java, I settled on what is known as the Script Tag Hack.

This is a technique that you certainly know already, you’ve just not thought about it this way. You can call for a javascript file from any location. Think about it – you do not need to call for javascript from your home domain. And JS file names do not need to be .js – they can be .asp or .php or .pl and they can also have parameters. Additionally, using DOM techniques, we can “write” to the page a new script tag dynamically – meaning that we can spontaneously call for a new script that was never on the page in the first place.

So I created a little class that creates parameters and appends a new script tag to the bottom of the body tag on a document (it is invisible, just like any other script tag). The next steps are to send back a response that is interpretable, and lastly garbage collection. Interpretability is done by a small script at the server which creates valid JS and calls a handler that you define with the results. It is wrapped in a try/catch block, so if there are any troubles with the inbound script or processing in your handler, you’ll know it straight away. Garbage collection is handled by me placing another small instruction in the return packet that, after the call has been completed, looks for the script tag that was added to the DOM and removes it.

Without further ado, here’s the code for the client side portion of the XRPC:

__XRPC_UsageCounter = 0;
function XRPC(opURL)
{
    if (!opURL) { opURL = ''; }

    this.url = opURL;
    this.lastRequest = '';
    this.__isIE = ((document.all) && (document.getElementById));

    this.clearParams();
}
XRPC.prototype.clearParams = function() { this.__parmArray = new Array(); }
XRPC.prototype.param = function(pName, pValue) { this.__parmArray[pName] = pValue; }
XRPC.prototype.exec = function(handler)
{
    if (!handler) { return alert('You must call XRPC.exec with a receiving function name as the handler'); }

    // Build the name=value pairs for the query string
    var temp = new Array();
    var ptr = 0;
    for (var parm in this.__parmArray)
        temp[ptr++] = parm + '=' + escape(this.__parmArray[parm]);

    // Unique ID so the return packet can find (and remove) this script tag
    var tempDate = new Date();
    var uid = tempDate.getTime() + '_' + __XRPC_UsageCounter++;
    this.lastRequest = this.url + '?uid=' + uid + '&h=' + handler + '&' + temp.join('&');

    // Append the new script tag to the body - the browser requests it immediately
    var newNode = document.createElement('script');
    if (this.__isIE)
    {
        newNode.id = uid;
        newNode.src = this.lastRequest;
    } else {
        newNode.setAttribute('id', uid);
        newNode.setAttribute('src', this.lastRequest);
    }
    var target = document.getElementsByTagName('body');
    target[0].appendChild(newNode);
}


Usage is fairly straightforward:
req = new XRPC('http://www.someOtherDomain.com/aFile.php');
req.param('theName', 'theValue');
req.param('anotherName', 'anotherValue');
req.exec('myHandler');

At this point, this would have been thrown out as a <script src=> sort of request:
http://www.someOtherDomain.com/aFile.php?uid=1182389169957_0&h=myHandler&theName=theValue&anotherName=anotherValue

Now at the server, I need to create some data and return a response:

<?php

XRPCResponse($_GET);

function XRPCResponse($theData)
{
    $standAlone = ($recvFunc = $_GET['h']) ? false : true;
    $garbage = ($uid = $_GET['uid']) ? "setTimeout('try { temp=document.getElementById(\"$uid\"); tempParent=temp.parentNode; tempParent.removeChild(temp); } catch(e) { };', 1000); " : '';

    if ($standAlone)
    {
        echo "$theData $garbage";
    } else {
        $theData = json_encode($theData);
        echo <<<JS
try { $recvFunc(eval('($theData)')); } catch(err) { alert("XRPC Local error: " + err.description); }
$garbage
JS;
    }
}
?>


In this example, I am simply returning the contents of the global PHP $_GET variable.

The results of the function look like this (broken up for readability):

try { myHandler(eval('({"uid":"1182389169957_0","h":"myHandler","theName":"theValue","anotherName":"anotherValue"})'));
} catch(err) {
alert("XRPC Local error: " + err.description); }
setTimeout('try {
temp=document.getElementById("1182389169957_0");
tempParent=temp.parentNode;
tempParent.removeChild(temp);
} catch(e) { };', 1000);


Note that this is completely valid JavaScript – that’s the important part. If this has a syntax error we have big troubles. Basically this script attempts to call <your handler> with a JSON version of the data I want returned. If there’s a problem, I alert with the error message in the first catch. Then I set a timeout for 1 second later to try and kill the node that contains the script. If it fails, it gives up quietly.
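For completeness, the page-side handler that the packet calls is just an ordinary function accepting the decoded object – something like this (the field names below are only those from my example request):

```javascript
// Sketch: the function named in the request's h= parameter. The return
// packet calls it with the server's data as a plain object.
function myHandler(result) {
  // e.g. result.theName === 'theValue', result.uid === the request's uid
  var parts = [];
  for (var key in result) {
    parts.push(key + '=' + result[key]);
  }
  return parts.join('&');  // here: just re-serialize so we can inspect it
}
```

In a real page you would update the DOM or application state here rather than returning a string.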

There’s really not much more to it than that. I am now retrofitting a bunch of code with this technique and am satisfied with its performance and stability. I welcome comments and flames on why I’m doing it wrong Applause
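If your back end isn't PHP, the same response packet is easy to build elsewhere. Here's a sketch in JavaScript (e.g. for a Node-style server) – the names are my own, and note that it passes the JSON literal directly as the handler's argument rather than routing it through eval:

```javascript
// Sketch: build the same response packet the PHP function above emits.
// handlerName and uid would come from the query string (h and uid);
// data is whatever you want returned to the page.
function buildXRPCResponse(handlerName, uid, data) {
  var json = JSON.stringify(data);
  // Self-cleanup instruction: find and remove the script tag by its uid
  var garbage = uid
    ? "setTimeout('try { temp=document.getElementById(\"" + uid + "\"); " +
      "tempParent=temp.parentNode; tempParent.removeChild(temp); } " +
      "catch(e) { };', 1000);"
    : '';
  if (!handlerName) { return garbage; }  // standalone mode: cleanup only
  return "try { " + handlerName + "(" + json + "); } " +
         "catch(err) { alert('XRPC Local error: ' + err.description); }\n" +
         garbage;
}
```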

Things that I should think about in the future
* It probably needs a timeout and reissue or fail mechanism. Even though the <script> tag is reliable, if there's a network problem and it doesn't come down, it needs a failover protocol.
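A first cut at that failover protocol might look like this – purely a sketch with names I made up; the server's return packet would have to call __xrpcComplete(uid) before invoking the handler:

```javascript
// Sketch: track pending requests by uid; if the response script never
// arrives, a timer fires the failure callback and clears the entry.
var __xrpcPending = {};

function xrpcWatch(uid, timeoutMs, onFail) {
  __xrpcPending[uid] = setTimeout(function() {
    delete __xrpcPending[uid];
    onFail(uid);          // reissue the request or report the failure here
  }, timeoutMs);
}

function __xrpcComplete(uid) {
  if (__xrpcPending[uid]) {
    clearTimeout(__xrpcPending[uid]);
    delete __xrpcPending[uid];
    return true;          // response arrived in time
  }
  return false;           // already timed out (or unknown uid)
}
```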

/p

StephenBauer

With a cursory read:  Nice.  Very nice.  Applause

I'm going to have to play around with this a bit.  Applause

Kudos,
SB

StephenBauer

I was mulling this over in the back of my head while I was unwinding in front of the boob tube for a bit.  You are pulling the javascript down from some other domain but it will then be run under the security context of the page that is pulling it down (i.e. not the security context of the other domain).

It is great for pulling down other domain javascript that is dynamic oriented but a lot of the time, or most of the time, fishing for the "dynamacy" (made up word) is not doable as it is usually sheltered in another security context usually by way of an iframe.  No?

I do like the cleanup aspect!  Maybe even segment out the other javascript a little more and clean up a few more of the clues.  (Just throwing that out there without really thinking about the possibility much.)

Please open my eyes if I am off base here.  Or show me where it comes in handy as I may be a bit thick tonight.  I'm probably applying it only to evil thoughts instead of something more useful or such.

SB

perkiset

quote author=StephenBauer link=topic=336.msg2316#msg2316 date=1182491085

It is great for pulling down other domain javascript that is dynamic oriented but a lot of the time, or most of the time, fishing for the "dynamacy" (made up word) is not doable as it is usually sheltered in another security context usually by way of an iframe.  No?

Not sure I understand you here... the hack works because any script pulled from anywhere lives under the security context of the original page location - so communication between them is transparent. This effectively makes *real* cross domain mashups smooth and effortless. The problem is that since you're calling for a script, a malicious mashup component could do some nasty things with the surfer's machine.  Applause

But truthfully, that is not why I built it. I wanted reliable, cross domain RPC over SSH and XHR does not do it.

quote author=StephenBauer link=topic=336.msg2316#msg2316 date=1182491085

I do like the cleanup aspect!  Maybe even segment out the other javascript a little more and clean up a few more of the clues.  (Just throwing that out there without really thinking about the possibility much.)

Sorry SB, don't get this one either - the node that the script was created on is eliminated in the garbage collection, so there is no trace of it left. I could, in fact, bump the time-to-clean down to, say, 1 ms and then the very instant after the initial callback was made the script would be gone. Note that, in this example I am calling the page-defined callback script immediately. I could call other stuff, or even pass entire blocks of code to the surfer before I call his callback (or after) - the evil possibilities are rather endless. That's why, in a WH situation this would be a dicey proposition between two untrusting entities. Dicey at best, really.

/p

kidplug

I think for "remote scripting" that will work very well.
And if you need to call another domain I guess you have to do it that way.
(Although I could swear I've used ajax/xhr to GET data from another domain - maybe over http, not https...?)

Moving on...
However, xhr lets me "fetch" big pieces of html from the server and stick them into my existing page.
I think you would have a hard time (at least messy) accomplishing that with this <script> technique.

Also, you are limited to GET requests.
I am using ajax/xhr to post data back to the server - saving data from arbitrarily sized forms which may exceed the limits of a GET.

The IE6 SSL errors and timeouts seem to occur only on POST, so if I was willing to make all my requests GET requests I could do that with xhr and avoid the SSL errors for the most part.

Just some thoughts...

perkiset

quote author=kidplug link=topic=336.msg2319#msg2319 date=1182530013

I think for "remote scripting" that will work very well.
And if you need to call another domain I guess you have to do it that way.
(Although I could swear I've used ajax/xhr to GET data from another domain - maybe over http, not https...?)

I coulda swore that as well... but upon deep and hard research I was never able to find that out exactly. Google makes it *look* like that, but there is a lot of proxying going on and who really knows. And all the documentation, forums and whitepapers I've read say that it should not ever be able to. Caveat: If you go into IE and make another domain a "trusted domain" then you can get past it (not in any other browser though) - so it can be done with targeted and specific surfer involvement, but this is not a real world solution to my thinking.

quote author=kidplug link=topic=336.msg2319#msg2319 date=1182530013

However, xhr lets me "fetch" big pieces of html from the server and stick them into my existing page.
I think you would have a hard time (at least messy) accomplishing that with this <script> technique.

Not at all - in fact, it would be faster. By using a similar technique as I describe above, simply pass the name of the node where you want the HTML to land, then make the javascript come back like this:
document.getElementById('myNode').innerHTML = '[ the HTML I want to place]';

alternately, have it call the handling function as above, but rather than passing a JSON encoded array, pass back a simple string - which is your html, so the function handler would do this:
function myHandler(theResult) { document.getElementById('myNode').innerHTML = theResult; }

... in both cases, this would move more quickly than XHR in my opinion. Personally, I don't see this as messy either, but that's only my opinion.


quote author=kidplug link=topic=336.msg2319#msg2319 date=1182530013

Also, you are limited to GET requests.
I am using ajax/xhr to post data back to the server - saving data from arbitrarily sized forms which may exceed the limits of a GET.

Truth. And as I mentioned above, if you have more than 2K of upload data this may not be the right fit - or you'll need to packetize it into multiple GETs as is done in the Ajax Extended library.


quote author=kidplug link=topic=336.msg2319#msg2319 date=1182530013

The IE6 SSL errors and timeouts seem to occur only on POST, so if I was willing to make all my requests GET requests I could do that with xhr and avoid the SSL errors for the most part.

The truth is, that if XHR works for you, then it's the best tool for the job. In your previous assertion, a GET would be confining, but in this one you would convert to a GET to avoid the SSL issues... that's pretty much my reason for switching. I find that there are many nooks and crannies that "get me" with XHR and I like a single, cross platform, cross browser, cross browser version and cross protocol solution - so the XRPC method works for me because it will work anywhere. I was a real believer in the XHR - but after so many caveats and rewrites of my stuff to handle its little shitties, I'd had enough and wanted a more robust and stable solution.

BTW - I think where it's running (in the application where I needed this in the first place), it's actually performing more quickly than XHR. Thus far, I am *very* pleased with having made this step.

/p

nutballs

The only time i can think of where the GET size limitation would be factor is in user entered data, in particular free form TEXTAREAs. beyond that though, i would think that most form communications can be "codified" or sent as single elements as each form field is completed. To me sending each form element as its completed would be ideal anyway, since I can then capture partial data even if the user decides to bail. The only one that would be an issue would be a "tell us in detail about your thing" type of fields, and that could be packetized, which at least conceptually doesnt seem that big an issue. (then again, i do no ajax so... take it with a grain of salt).

perkiset

quote author=nutballs link=topic=336.msg2322#msg2322 date=1182531695

The only time i can think of where the GET size limitation would be factor is in user entered data, in particular free form TEXTAREAs. beyond that though, i would think that most form communications can be "codified" or sent as single elements as each form field is completed. To me sending each form element as its completed would be ideal anyway, since I can then capture partial data even if the user decides to bail. The only one that would be an issue would be a "tell us in detail about your thing" type of fields, and that could be packetized, which at least conceptually doesnt seem that big an issue. (then again, i do no ajax so... take it with a grain of salt).


I think you're right on... and taking your example, let's say a messaging service (Tell us what your problem is) - IMO, I'd probably do that as a POSTed form in any case, rather than using an AJAX-like mechanism. Personally, I am not for the replacement of all things POST FORM with AJAX - RPC is a great tool, but not a complete replacement. Again, IMO, an AJA mechanism (note the on-purpose elimination of the X hehe) should be used for small, tight communication between an app and the server... not for wholesale data exchange.

And BTW - you'll be doing more AJA than you ever wanted to soon Applause

/p

nutballs

Applause

perkiset

Applause

thedarkness

quote author=perkiset link=topic=336.msg2318#msg2318 date=1182529845

But truthfully, that is not why I built it. I wanted reliable, cross domain RPC over SSH and XHR does not do it.


Did you mean SSL perk? you asked me for comments on this perk and as usual I'm late to the party but I don't really have any comments except to nod sagely in agreement with you and then hope to hell I can get it to do what i need it to do. I think it's the best tool we've got at the moment.

Cheers,
td

perkiset

SSL, SSH, whatever it takes (bad reference to an old Michael Keaton movie... anyone? Anyone? SB?)

Yup, SSL is what I meant, thanks TD.

nop_90

pretty cool.

I think ajax has its place in things like webapps.
I was going to make a webapp for inventory, requirement would be they have to use the FF browser.

I think ur idea might have uses in other areas also.
I will have to think about it for a while.

perkiset

Knowing where several of us hang out, and what that mind of yours will do with this notion is rather frightening. I mentally went down a lot of paths with this stuff and there are a lot of very black possibilities.

Applause

Applause /p

m1t0s1s

There's also COMET, more server side, rather than client refreshing.

http://en.wikipedia.org/wiki/Comet_%28programming%29

m1t0s1s

perkiset, did you just hit your own forum with spam? What was that harkening elegance post all about?

perkiset

quote author=m1t0s1s link=topic=336.msg2491#msg2491 date=1183660175

perkiset, did you just hit your own forum with spam? What was that harkening elegance post all abovt?


Interesting that you read that... I created a post and then deleted it - I was testing the SMF Forum software for its encoding of double and single quotes... (which it did expertly) and now I am walking through the code to see what it did. But I deleted the post almost as quickly as I created it... where are you seeing that?

Thanks!
/p

m1t0s1s

quote author=perkiset link=topic=336.msg2492#msg2492 date=1183661526

quote author=m1t0s1s link=topic=336.msg2491#msg2491 date=1183660175

perkiset, did you just hit your own forum with spam? What was that harkening elegance post all about?


Interesting that you read that... I created a post and then deleted it - I was testing the SMF Forum software for its encoding of double and single quotes... (which it did expertly) and now I am walking through the code to see what it did. But I deleted the post almost as quickly as I created it... where are you seeing that?

Thanks!
/p



It was from the smf notification, archived forever, in gmail. Speaking of which, I just found the blog of the creator of gmail, Paul Buchheit

