On Sense and Reference

A little bit of Frege by someone who probably doesn’t understand.

a = a holds a priori. Kant would say it’s “analytic”.

Using * to denote “the referent of”, statements of the kind *a = *b, rather than being self-evident, can instead extend our knowledge. We’ve gained the insight that a and b refer to the same thing: as a silly example, learning that “the ratio of a circle’s circumference to its diameter” refers to the same thing as “the infinite sum 4 (1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \ldots )”.

We can have *a = *b without it being the case that a = b. The signs are not the same. In what ways do they differ?

Even if the referents are the same, the sense is also important.


  • a referent potentially has many signs.
  • (we hope) a sign in a given context has at most one referent.
  • It may have none – consider the sign “the celestial body most distant from the earth”.

A sign has a sense, and refers to a referent.

In natural language, when we use words (without quotes), we normally intend to speak of their referent. If we want to refer to the sign itself we can quote the words. We can also call out the sense explicitly. When using natural language we quite often presuppose the existence of a referent.

We can attempt to make some definitions a tad more concrete.

  • a sense/thought is the way in which a sign refers to an object.
  • a proper name/sign/word/expression expresses/has a sense and refers to/references/designates its referent.
  • a referent is an object of some kind.
  • a sentence is a special kind of expression which has a truth-value as its referent.
  • a truth-value is something which is either true or false, with no other possibilities.

Now, we postulate the following invariant: the truth-value of a sentence should remain unchanged when we replace a part of the sentence with an expression having the same referent. The truth-value of a sentence lives in the space of referents and isn’t affected by the sense.
Performing a substitution may well change the sense: replacing “the morning star” with “the evening star” (both refer to Venus) in the true sentence “the morning star is a planet” keeps it true, but changes the thought expressed.

Subtleties here include:

  • Due to the quirks of natural language, parts of a sentence might be implicitly quoted – the referent of the quoted expression is the sense of the unquoted expression.
  • Frege enumerates many different grammatical constructs with edge-cases.
  • A ‘part’ of a sentence may not be a contiguous string of words, so one has to be careful when performing replacements.

Apparently Russell blows a lot of this stuff up, but I’m not there yet.

Capital in the Twenty-First Century, A Free Nation Deep in Debt, Superintelligence – Three Viewpoints

Piketty’s Capital in the Twenty-First Century, MacDonald’s A Free Nation, Deep in Debt and Bostrom’s Superintelligence: Paths, Dangers, Strategies cover some similar topics. I wanted to draw some comparisons.

  • Capital in the Twenty-First Century: Inequality as an economic inevitability (ceteris paribus).
  • A Free Nation, Deep in Debt: Democracy as an economic imperative.
  • Superintelligence: Greater-than-human intelligence as an economic inevitability.

  • Capital: Left unchecked, capitalism (and r > g) will lead to extreme inequality. It should be the role of governments to intervene.
  • A Free Nation: Left unchecked, governments can tend to autocracy and tyranny. A free market in government debt can hold governments accountable.

  • Capital: Long-run global growth will never exceed 1.5%.
  • Superintelligence: “History, at the largest scale, seems to exhibit a sequence of distinct growth modes, each much more rapid than its predecessor” – Pleistocene society (224,000-year doubling time), farming society (909-year doubling time), industrial society (6.3-year doubling time).

  • A Free Nation: A large number of citizen bondholders ensures a government enacts the desires of its citizenry.
  • Superintelligence: Seed AI could be given a goal of carrying out humanity’s “coherent extrapolated volition”.

  • Capital: The World Wars had a large impact on the structure of capital – destroying a large amount of capital stock and so increasing labour’s share of income.
  • A Free Nation: The World Wars changed the role of the citizen bondholder, firstly through measures such as rationing and compulsory saving, and secondly because post-war globalisation means “there is nothing patriotic or intrinsically democratic about [today’s bond markets]. Credit markets still generally rank democracies more favourably than other forms of government”.

  • Capital: Without intervention, in the long run labour’s share of income will decrease. This is inevitable given that the return on capital (r) will be larger than growth (g).
  • Superintelligence: After an emergence of superintelligence, “the income share received by labour would then dwindle to practically nil”. All income would be derived from capital, whilst labour income would be driven down to the cost of computation. On the other hand, world GDP would undergo extremely rapid expansion – the doubling time could be days, hours or minutes.

  • Capital: Growth from population increases will become less important as world population plateaus and birth rates fall in the developing world. In western Europe, population will decrease, putting downward pressure on growth (all growth must come from productivity gains).
  • Superintelligence: Simulated minds (mind-uploading) and space colonization imply a potential for extremely rapid population growth. Alternatively, if humans do not end up owning the superintelligence revolution, Malthus rears his ugly head.

  • Capital: We ignore human capital throughout.
  • A Free Nation: Human capital! This is very important.
  • Superintelligence: Human capital could also be crucial in keeping the emergence of superintelligence in check.

  • Capital: Inequality is BAD.
  • Superintelligence: Inequality may have “been [one of] the greatest humanitarians” – it enabled the “average level of well-being to occasionally bob up slightly above that of life at the very margin of subsistence”.

No real synthesis at this point. I think my biggest problem with Piketty is still the assumption of a 1.5% cap on growth – Marx + 1.5%.

Death by Watermelon

How much watermelon is deadly?

There are many conceivable ways in which a watermelon could in fact prove fatal. One potentially toxic aspect of watermelon is right there in the name – water.

Water, when consumed in large enough quantities, can lead to hyponatremia – “insufficient salt in the blood”. At this point, osmotic pressure makes cells swell up, leading to bad things.

How much is too much? Studies aren’t entirely conclusive. The figure is somewhere around 90ml/kg of body mass. For an average adult human, around 6 litres of water may prove fatal.

For an average adult watermelon, the flesh weighs around 6.3 kg. At 92% water by mass, that’s about 5.8 litres of water.

Eating a slightly above-average sized watermelon could prove fatal!
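To make the arithmetic explicit, here’s a quick back-of-the-envelope check (a Scala REPL snippet; the 67 kg body mass is an assumed figure for an “average adult”):

val lethalLitresPerKg = 0.090   // ~90 ml of water per kg of body mass
val bodyMassKg        = 67.0    // assumed "average adult"
val melonFleshKg      = 6.3     // flesh of an average watermelon
val waterFraction     = 0.92    // watermelon is ~92% water by mass

val lethalLitres = lethalLitresPerKg * bodyMassKg   // ≈ 6.0 litres
val melonLitres  = melonFleshKg * waterFraction     // ≈ 5.8 litres

println(f"potentially lethal: $lethalLitres%.1f L; in one watermelon: $melonLitres%.1f L")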

This is ignoring a couple of potentially important points:

  • the figure of 6 litres is for consumption in one go – eating a whole watermelon in one go would be quite a feat
  • we’re completely ignoring that watermelon isn’t just water. It will actually contain electrolytes. If you salt your watermelon, even better

Be careful.

Changing the Spark Context of an existing RDD

Spark RDDs are supposed to be Resilient. If something bad happens whilst computing, we can recover! At least, that’s the idea.

scala> val myRdd = sc.parallelize(Seq(1,2,3))
myRdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12
scala> sc.stop

If we stop the spark context for any reason, we now find our RDD is useless!

scala> myRdd.first
java.lang.IllegalStateException: SparkContext has been shutdown
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1316)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1339)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1353)
at org.apache.spark.rdd.RDD.take(RDD.scala:1098)

This isn’t good at all! Let’s make a new spark context.

scala> val sc = new org.apache.spark.SparkContext("local[8]", "new context")
sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@542beecb

We now need to inject this back into our RDD. The spark context is stored in a private field, so we have to reach for reflection.

val rddClass = classOf[org.apache.spark.rdd.RDD[_]]
val scField = rddClass.getDeclaredField("_sc") // spark context stored in _sc
scField.setAccessible(true) // now we can access it

Now we just set the spark context.

scField.set(myRdd, sc)

Observe that this works!

scala> myRdd.sum
res5: Double = 6.0
scala> myRdd.first
res6: Int = 1

This is quite scary and probably should not be used for anything real. Additionally, if we had an RDD with many dependencies, we’d have to crawl the dependency graph and swap the context out in every place (I think).


Another approach might be to give the RDD a dynamic proxy for the spark context – one which delegates to some true spark context that can then be swapped out in a single place.

What are we actually trying to do here? If we have a long-running application which allows users to create RDDs, it would be nice to be able to recover from spark cluster bounces. We could keep track of the operations required to produce the RDDs in the first place (which is arguably a better approach) but I decided to spend thirty minutes poking around anyway, and was pleasantly surprised at the (illusion of) progress I made!
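A minimal sketch of that “track the recipe” idea (RecoverableRdd is my own name, not anything Spark provides): hold on to the function that builds the RDD rather than the RDD itself, and rebuild against whichever context is currently alive.

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Hold the recipe for an RDD rather than the RDD itself, so it can be rebuilt
// against a fresh SparkContext after the old one has been stopped.
class RecoverableRdd[T](recipe: SparkContext => RDD[T]) {
  @volatile private var cached: Option[(SparkContext, RDD[T])] = None

  def get(sc: SparkContext): RDD[T] = cached match {
    // reuse the cached RDD only if it was built against the context we were given
    case Some((oldSc, rdd)) if oldSc eq sc => rdd
    case _ =>
      val rebuilt = recipe(sc)
      cached = Some((sc, rebuilt))
      rebuilt
  }
}

// Usage:
//   val nums = new RecoverableRdd(sc => sc.parallelize(Seq(1, 2, 3)))
//   nums.get(sc).sum   // rebuilds transparently if sc is a new context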

Cosine Similarity, Pearson Correlation, Inner Products

To begin, a criticism

I picked up the Haskell Data Analysis Cookbook. The book presents examples of comparing data using the Pearson correlation coefficient and using cosine similarity.

pearson xs ys = (n * sxy - sx * sy) / sqrt ((n * sxx - sx * sx) * (n * syy - sy * sy))
    where
      n = fromIntegral (length xs)
      sx = sum xs
      sy = sum ys
      sxx = sum $ zipWith (*) xs xs
      syy = sum $ zipWith (*) ys ys
      sxy = sum $ zipWith (*) xs ys

cosine xs ys = dot xs ys / (len xs * len ys)
    where
      dot a b = sum $ zipWith (*) a b
      len a = sqrt $ dot a a

Although these code snippets are both calculating the ‘similarity’ between two vectors and actually, as we shall see, share a lot of structure, this is not at all apparent from a glance.

We can fix that however…

Definition of an Inner Product

An inner product is conceptually a way to see how long a vector is after projecting it along another (inside some space).

Formally, an inner product is a binary operator satisfying the following properties.

Linearity (in the first argument)

\langle u+v,w \rangle = \langle u,w\rangle + \langle v,w\rangle
\langle \alpha u,w\rangle = \alpha\langle u,w\rangle for \alpha \in \mathbb{R}
We are saying that sums inside the left-hand slot can be pulled apart, and constant factors can be pulled out.

(Conjugate) Symmetry

\langle u,v \rangle = \langle v,u \rangle or in the complex case, \langle u,v \rangle = \overline{\langle v,u \rangle}
In the real case, we’re saying everything is symmetric – it doesn’t matter which way you do it. In the complex case we have to reflect things by taking the conjugate.

Positive Definiteness

\langle u,u \rangle \ge 0 with equality iff u = 0
Here we’re saying that projecting a vector onto itself never gives a negative result. Secondly, the only way we can end up with a result of zero is if the vector is itself the zero vector.

From Inner Product to a notion of ‘length’

Intuitively, the length of a vector must be

  • positive or zero (a negative length doesn’t make much sense), with a length of zero corresponding to the zero vector
  • linear (if we scale the vector threefold, the length should also increase threefold)

Given that \langle u,u \rangle \ge 0 we might be tempted to set length(u) := \langle u,u \rangle, but then upon scaling u \rightarrow \alpha u we get length(\alpha u) = \langle \alpha u, \alpha u \rangle = \alpha^2 \langle u,u \rangle – we’re not scaling linearly.

Instead, defining ||u|| := \sqrt{\langle u,u \rangle}, everything is good!


Now, in the abstract, how similar are two vectors?

How about we first stop caring about how long they are, and care only about whether they point in the same direction. We can project one along the other and see how much it shrinks in length.

Projecting is kind of like seeing what its component is in that direction – i.e. considering 2-dimensional vectors in the plane, projecting a vector onto a unit vector in the x direction will tell you the x component of that vector.

Let’s call two vectors a and b.

Firstly let’s scale them to be both of unit length, \hat{a} = \frac{a}{||a||}, \hat{b} = \frac{b}{||b||}

Now, project one onto the other (remember we’re not caring about order because of symmetry).
similarity(a,b) := \langle \frac{a}{||a||}, \frac{b}{||b||} \rangle

Using linearity we can pull some stuff out (and also assuming everything’s happily a real vector – not caring about taking conjugates)…
similarity(a,b) := \frac{\langle a, b \rangle}{||a|| ||b||}

Making Everything Concrete

Euclidean Inner Product

The dot product we know and love.
a \cdot b = a_1 b_1 + \dots + a_n b_n

Plugging that into the similarity formula, we end up with the cosine similarity we started with!
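Written out in coordinates, that is

similarity(a,b) = \frac{a \cdot b}{||a|| \, ||b||} = \frac{\sum_i a_i b_i}{\sqrt{\sum_i a_i^2} \, \sqrt{\sum_i b_i^2}}

which is exactly what the cosine function above computes – and since a \cdot b = ||a|| \, ||b|| \cos\theta, it really is the cosine of the angle between a and b.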

Covariance Inner Product

The covariance between two vectors is defined as Cov(X,Y) = \mathbb{E}((X - \mathbb{E}(X))(Y - \mathbb{E}(Y))) where we’re abusing the notion of expectation somewhat. This in fact works if X and Y are arbitrary L2 random variables… but for the very concrete case of finite vectors we could consider \mathbb{E}(X) = \frac{1}{n}(x_1 + \dots + x_n).

We’re saying that in this space, projecting one vector onto another measures how covariant the first is with the second – whether or not they move together.

Plugging this inner product into the similarity formula, we instead get the pearson coefficient!

In fact, given Cov(X,X) = Variance(X), in this space we have length(X) = \sqrt{Variance(X)} = StdDev(X) =: \sigma_X,

i.e. similarity(X,Y) = \frac{Cov(X,Y)}{\sigma_X \sigma_Y}.
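As a sanity check, this agrees with the book’s pearson snippet. Writing sx = \sum_i x_i, sxy = \sum_i x_i y_i and so on,

n \cdot sxy - sx \cdot sy = n^2 \left( \tfrac{1}{n} \textstyle\sum_i x_i y_i - \bar{x}\bar{y} \right) = n^2 \, Cov(X,Y)

and similarly n \cdot sxx - sx^2 = n^2 \, Var(X), so

\frac{n \cdot sxy - sx \cdot sy}{\sqrt{(n \cdot sxx - sx^2)(n \cdot syy - sy^2)}} = \frac{Cov(X,Y)}{\sigma_X \sigma_Y}

which is the expression the pearson function computes.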

Improving the code

Now that we know this structure exists, I posit the following as being better

similarity ip xs ys = ip xs ys / (len xs * len ys)
   where len v = sqrt (ip v v)

-- the inner products
dot xs ys = sum $ zipWith (*) xs ys

covariance xs ys = exy - ex * ey
   where e vs = sum vs / fromIntegral (length vs)
         exy = e $ zipWith (*) xs ys
         ex = e xs
         ey = e ys

-- the similarity functions
cosineSimilarity = similarity dot
pearsonSimilarity = similarity covariance

Things I’m yet to think about

…though maybe the answers are apparent.

We have a whole load of inner products available to us. What does it mean to use those inner products?
E.g. \langle f,g \rangle = \int_{-\pi}^{\pi} f(t) \overline{g(t)} \, \mathrm{d}t on \mathbb{L}^2[-\pi,\pi] – the inner product behind Fourier series. I’m not sure the resulting similarity is anything particularly special though…

Notes: First Steps with the Interactive Brokers API (OS X Mavericks)


IB Workstation

In order to use the API, the IB Workstation must be installed and running – one then connects to this using the API. Installation is available for Mac, Linux and Windows – for Mac the automatic pkg is here.


IB API

Next step is to download the API – a collection of Java source files which handle talking to the IB Workstation. Download the jar and extract it somewhere.

IB Controller

Manually running the GUI app each time/manually clicking through dialogs etc, is a pain. To mitigate this the IB Controller github project provides a wrapper around launching/controlling the app. Getting this working was just a matter of following the userguide.

Creating a Simple Automated Client (Eclipse)

Ultimately this’ll be done with Gradle, but for now let’s just quickly use Eclipse.

  • Create a new eclipse project
  • Add the folder ‘javaclient’ from wherever the API was extracted as a linked source folder (right click on project -> build path -> link source folder)
  • We’ll use classes in com.ib.controller
    • instantiate an ApiController
    • instantiate a NewContract
    • instantiate a NewOrder
    • call the controller’s placeOrModifyOrder method (for now just use a no-op IOrderHandler)

Next Steps

I feel it’s important for this type of thing (i.e. time sensitive, immediate monetary penalties for errors) to focus on verification/programming defensively.

Thinking in pseudocode, I feel like I want to place an order, and be notified if it’s not filled within $timeout seconds or if any other error happens. I also feel like I want to maintain a local version of the truth, and always compare this against what the API reports as the truth before doing anything.
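Here’s a minimal sketch of that pseudocode, with a hypothetical Broker trait standing in for the real API – none of these names come from the IB library:

import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Hypothetical stand-in for the real API: placeOrder completes when the order
// is filled, or fails with whatever error the broker reports.
case class Order(symbol: String, quantity: Int)
case class Fill(order: Order, price: Double)
trait Broker {
  def placeOrder(order: Order): Future[Fill]
}

// Place an order and either get a fill within the timeout, or learn why not.
// A fuller version would also reconcile a locally-held view of open orders
// against what the API reports before taking any further action.
def placeWithTimeout(broker: Broker, order: Order, timeout: FiniteDuration): Either[String, Fill] =
  try Right(Await.result(broker.placeOrder(order), timeout))
  catch {
    case _: TimeoutException => Left(s"not filled within $timeout")
    case e: Exception        => Left(s"order failed: ${e.getMessage}")
  }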


List of Datasets, List of Lists of Datasets

Consider this sort of a public-facing list of datasets I’ve found interesting, have played with or want to play with.

List of Datasets

Lending Club

Peer to peer credit marketplace Lending Club publishes data on issued and declined loans. https://www.lendingclub.com/info/download-data.action

World Health Organisation

The WHO publishes many interesting datasets at http://www.who.int/research/en/. They don’t however do a great job of linking to the raw datasets: http://www.who.int/healthinfo/statistics/mortality_rawdata/en/ is a comprehensive dataset providing mortality rates for all reporting countries, but difficult to find from the navigation.

New York Times

The New York Times has a fairly comprehensive open API, documented at http://developer.nytimes.com/docs


Divvy

The Chicago public cycle hire scheme (akin to New York’s Citibike, London’s Barclays Boris Bike) published data on 750 000 trips made for their data challenge. http://divvybikes.com/datachallenge


Outpan

Outpan aims to provide a single database for turning barcodes into product information. Not extremely complete. http://www.outpan.com/index.php


Medicare

In the name of transparency, CMS publishes a dataset containing information on usage of Medicare. It could complement some of the other medical datasets available. http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Physician-and-Other-Supplier.html

List of Lists of Datasets