Some pros and cons
Pros for polymorphic:
- A smaller polymorphic interface is easier to read. I only have to remember one method.
- It goes with the way the language is meant to be used - Duck typing.
- If it's clear which objects I want to pull a rabbit out of, there shouldn't be ambiguity anyway.
- Doing a lot of type checking is considered bad even in static languages like Java, where piles of checks on an object's type make for ugly code. Should the magician really need to differentiate between the types of objects he's pulling a rabbit out of?
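As a sketch of what the polymorphic approach looks like (the method name pullRabbitOutOfHat and the object names are my own illustration, not from any library): each object implements the same method, and the Magician relies on duck typing rather than checking types.

```javascript
function Rabbit(origin) {
  this.origin = origin;
}

// Each "pullable" object implements the same method itself.
var hat = {
  pullRabbitOutOfHat: function () {
    return new Rabbit("a hat");
  }
};

var box = {
  pullRabbitOutOfHat: function () {
    return new Rabbit("a box");
  }
};

var Magician = {
  // Works on anything that quacks like a pullable object.
  perform: function (obj) {
    return obj.pullRabbitOutOfHat();
  }
};

console.log(Magician.perform(hat).origin); // "a hat"
console.log(Magician.perform(box).origin); // "a box"
```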
Pros for ad-hoc:
- The polymorphic version is less explicit: can I pull a rabbit out of a Cat instance? Would that just work? If not, what is the behavior? If I don't limit the type here, I have to do so in the documentation or in the tests, which might make a worse contract.
- You have all the handling of pulling a rabbit in one place, the Magician (some might consider this a con)
- Modern JS optimizers differentiate between monomorphic functions (which work on only one type) and polymorphic ones. They know how to optimize the monomorphic ones much better, so the pullRabbitOutOfString version is likely to be much faster in engines like V8. See this video for more information. Edit: I wrote a perf test myself; it turns out that in practice this is not always the case.
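For contrast, a sketch of the ad-hoc version (again, the Rabbit constructor and method names are illustrative): one function per type, each of which only ever sees arguments of one shape.

```javascript
function Rabbit(origin) {
  this.origin = origin;
}

var Magician = {
  // One monomorphic function per type: each always receives the same
  // argument shape, which JIT engines can optimize aggressively.
  pullRabbitOutOfString: function (str) {
    return new Rabbit("the string " + str);
  },
  pullRabbitOutOfNumber: function (num) {
    return new Rabbit("the number " + num);
  }
};

console.log(Magician.pullRabbitOutOfString("abc").origin); // "the string abc"
```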
Some alternative solutions:
In my opinion, this sort of design isn't very 'Java-Scripty' to begin with. JavaScript is a different language, with different idioms from languages like C#, Java or Python. These idioms originate in years of developers trying to understand the language's weak and strong parts, and what I'd do is try to stick with them.
There are two nice solutions I can think of:
- Elevating objects: making objects "pullable" so they conform to an interface at run time, then having the Magician work on pullable objects.
- Using the strategy pattern: teaching the Magician dynamically how to handle different types of objects.
Solution 1: Elevating Objects
One common solution to this problem is to 'elevate' objects with the ability to have rabbits pulled out of them.
That is, have a function that takes some type of object and adds the ability to be pulled out of a hat. Something like:
function makePullable(obj) {
    obj.pullOfHat = function () {
        return new Rabbit(obj.toString());
    };
}
I can make such makePullable functions for other objects; I could create a makePullableString, etc., defining the conversion for each type. However, after I have elevated my objects, there is no static type through which to use them generically. An interface in JavaScript is determined by duck typing: if an object has a pullOfHat method, I can pull a rabbit from it with the Magician's method.
Then Magician could do:
Magician.pullRabbit = function (pullable) {
    var rabbit = pullable.pullOfHat();
    return { rabbit: rabbit, text: "Tada, I pulled a rabbit out of " + pullable };
};
Elevating objects, using some sort of mixin pattern seems like the more JS thing to do.
(Note this is problematic with the language's value types, which are string, number, boolean, null and undefined; string, number and boolean are box-able.)
Here is an example of what such code might look like
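For instance, putting the pieces above together (repeated here in self-contained form; the Rabbit constructor is assumed to take a description string), including the boxing needed for a primitive string:

```javascript
function Rabbit(origin) {
  this.origin = origin;
}

// Elevate an object: give it the ability to be pulled out of a hat.
function makePullable(obj) {
  obj.pullOfHat = function () {
    return new Rabbit(obj.toString());
  };
}

var Magician = {
  pullRabbit: function (pullable) {
    var rabbit = pullable.pullOfHat();
    return { rabbit: rabbit, text: "Tada, I pulled a rabbit out of " + pullable };
  }
};

// Primitives must be boxed before they can be elevated:
var boxedString = new String("a hat");
makePullable(boxedString);
var result = Magician.pullRabbit(boxedString);
console.log(result.text); // "Tada, I pulled a rabbit out of a hat"
```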
Solution 2: Strategy Pattern
When discussing this question in the JS chat room on Stack Overflow, my friend phenomnomnominal suggested the use of the Strategy pattern.
This would allow you to add the ability to pull rabbits out of various objects at run time, and would create very JavaScript'y code. A magician can learn how to pull objects of different types out of hats, and then pulls them based on that knowledge.
Here is how this might look in CoffeeScript:
class Magician
    constructor: () -> # A new Magician can't pull anything
        @pullFunctions = {}
    pullRabbit: (obj) -> # Pull a rabbit, with a handler chosen based on type
        func = @pullFunctions[obj.constructor.name]
        if func? then func(obj) else "Don't know how to pull that out of my hat!"
    learnToPull: (obj, handler) -> # Learn to pull a rabbit out of a type
        @pullFunctions[obj.constructor.name] = handler
You can see the equivalent JS code here.
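In case the link is unavailable, a rough JavaScript equivalent of the CoffeeScript class above might look like this (a sketch, not the author's exact code):

```javascript
function Magician() {
  // A new Magician can't pull anything yet.
  this.pullFunctions = {};
}

Magician.prototype.pullRabbit = function (obj) {
  // Pick a handler based on the object's constructor name.
  var func = this.pullFunctions[obj.constructor.name];
  return func ? func(obj) : "Don't know how to pull that out of my hat!";
};

Magician.prototype.learnToPull = function (obj, handler) {
  // Learn to pull a rabbit out of this type of object.
  this.pullFunctions[obj.constructor.name] = handler;
};
```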
This way you benefit from both worlds: the knowledge of how to pull isn't tightly coupled to either the objects or the Magician, and I think this makes for a very nice solution.
Usage would be something like:
var m = new Magician(); // create a new Magician
// Teach the Magician
m.learnToPull("", function () {
    return "Pulled a rabbit out of a string";
});
m.learnToPull({}, function () {
    return "Pulled a rabbit out of an Object";
});
m.pullRabbit(" Str"); // "Pulled a rabbit out of a string"
Having a public API for data access from your site is about making the data available in a convenient, supported, well-defined and always-up-to-date manner. It is a way for a site owner to say 'here is data I collect and own, but I want you to be able to use it so I'm making it available. Oh, and I promise not to change the structure or do anything that might break your applications without communicating about it clearly'.
Crawling has some technical limitations, some very important legal considerations, AND is prone to breaking without any sort of notification from the owner of the data. Personally, I would not hesitate to consume a public JSON API if it has the data I need, but I'd be hard pressed to start writing a crawler/parser to get it off a website...
Best Answer
You need several types of protection.
Firstly, you need to prevent Site A's key from being used on Site B.
In theory, if the key is bound to a domain, you can't depend on the referer header; but because your client is embedding a script directly, you can reasonably rely on document.location on the client side. Sending that location (or portions of it) to the server directly is unreliable, but you can use it to generate a session key:
- The client_key is sent in the request for the API library.
- A session_key is generated using hash(document.location.host + session_salt).
- The session_key + client_key are sent for an API call.
- The server verifies by looking up the client_key's host and "salt" in the session, computing the hash, and comparing it to the provided session_key.
Secondly, you need to impede Hacker Hank from opening the debug console, or using a modified client on Site A, to do whatever he wants with your API.
Note, though, that it's very difficult, if not impossible, to completely prevent Hacker Hank from abusing the API. But you can make it more difficult, and the most reasonable way to impede Hank that I'm aware of is rate limiting.
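For example, a minimal fixed-window rate limiter per key might look like this (a sketch only; a production system would typically keep the counters in shared storage like Redis rather than in process memory):

```javascript
// Allow at most `limit` calls per key within each `windowMs` window.
function RateLimiter(limit, windowMs) {
  this.limit = limit;
  this.windowMs = windowMs;
  this.counters = {}; // key -> { windowStart, count }
}

RateLimiter.prototype.allow = function (key, now) {
  now = now === undefined ? Date.now() : now;
  var entry = this.counters[key];
  // Start a fresh window if none exists or the old one has expired.
  if (!entry || now - entry.windowStart >= this.windowMs) {
    entry = this.counters[key] = { windowStart: now, count: 0 };
  }
  entry.count += 1;
  return entry.count <= this.limit;
};

var limiter = new RateLimiter(2, 1000);
console.log(limiter.allow("client-a", 0));    // true
console.log(limiter.allow("client-a", 10));   // true
console.log(limiter.allow("client-a", 20));   // false (over the limit)
console.log(limiter.allow("client-a", 1500)); // true (new window)
```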
Thirdly, as you're likely already doing: encrypt the traffic. Sure, the NSA will see it; but Hacker Hank is less likely to.