I've been using PJAX as well. Basically, when you click a link, jQuery/PJAX intercepts the click, makes the request to the server, and on the server side you check whether the request came in as a PJAX request. If so, you send back only the "partial" section of the page that's being loaded/updated, and at the same time pushState updates the URL in the address bar to match the page you've loaded. If the server doesn't detect a PJAX request, it responds with the entire page - header, contents, footer, etc. If somebody copies and pastes the URL into a new tab, your site behaves the same way as if no PJAX header were sent - it loads the entire page. A search-engine spider behaves the same way: it should "see" the same content a user sees navigating the site with PJAX.
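A minimal sketch of the server-side check, assuming an Express app in TypeScript - the route, template names, and `loadArticle` helper are all hypothetical, just to illustrate branching on the `X-PJAX` header that jquery-pjax sends:

```typescript
import express from "express";

const app = express();
app.set("view engine", "ejs"); // template engine choice is arbitrary

// Hypothetical data lookup, for illustration only
function loadArticle(id: string) {
  return { id, title: `Article ${id}`, body: "…" };
}

app.get("/articles/:id", (req, res) => {
  const article = loadArticle(req.params.id);

  if (req.get("X-PJAX")) {
    // PJAX navigation: return only the content fragment
    res.render("article-partial", { article });
  } else {
    // Direct visit, pasted URL, or crawler: return the full page
    res.render("article-full", { article });
  }
});

app.listen(3000);
```

On the client, jquery-pjax wires the link interception up with something like `$(document).pjax('a', '#pjax-container')`, which swaps the returned fragment into the container and calls pushState for you.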
This particular project runs in kiosk mode, more or less, so I don't care about it at all. Also, if it's just pages, rather than an application, you probably don't need this whole extra layer of junk.
What happens to SEO?