The other thing is that they have partially fixed the most "wat" bug in PHP: for some arrays that aren't even all that hard to come by when interacting with JS, the following code could change `count($arr)`:
The issue is that `$key` could be a string describing an integer, but `$arr[$key]` will automatically see that you're using a string-int and convert it to an int-int, setting a different key than the internal one.
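A minimal sketch of how that coercion plays out (the numeric-string keys here come from an object cast, one of the few ways to produce them; on 7.2+ the cast itself was fixed, so the count stays at 2):

```php
<?php
// A JSON payload whose object happens to have numeric keys.
$obj = json_decode('{"0": "a", "1": "b"}');

// Before 7.2, this cast kept the keys as the *strings* "0" and "1";
// since 7.2 they are converted to proper integer keys.
$arr = (array) $obj;

foreach ($arr as $key => $val) {
    // On affected versions $key is the string "0", but the write below
    // coerces it to the integer 0 (a different key), so the array
    // silently grows and count($arr) ends up as 4 instead of 2.
    $arr[$key] = strtoupper($val);
}

echo count($arr), "\n"; // 2 on PHP 7.2+, 4 on earlier versions
```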
It's only a partial fix but it hits the most common case: you have a JSON payload that happens to have an index of objects which happens to have been given numeric keys:
If your framework hands JSON to you as a StdClass object then you would typically convert this to an associative array since that's semantically what it is: as opposed to the internal entities, which in this case appear to be full-fledged objects. (The difference is that the keys of a full-fledged object should be known in advance at a data-schema level and have a control structure of getting/setting `->prop_name`; the keys of a dictionary should be user-settable and have a control structure of `foreach ($dict as $key => $val) {}`.)
So the bug may still exist in some fringe cases, as the underlying cause is not treated, but it is now auto-fixed by the common idiom of casting `$dict = (array) $params->dict`.
Seems like a solid, gradually improving release, and according to some benchmarks [1] and notes from the development team, it improves performance as well. Nothing like the jump the ecosystem enjoyed from 5.6 to 7.0, but still around 10% on average.
What is PHP like nowadays? I know it has a bad reputation and a history of kitchen sink design but have they managed to tame it into something more sensible? Do the docs help steer developers away from legacy issues and common bad patterns?
The modern ecosystem (Symfony 3/4 especially) and practices enabled by 7.0+ are good, but all the old horror is still lurking behind the curtain, and there's not much to prevent naive coders from writing garbage like it's 1999. It's not that dissimilar from the Javascript situation, where you have some people still writing spaghetti jquery in random globally-namespaced places and some people in basically a whole other world writing type-checked and compiled modular code.
Some important things like sensible fast native data types (php-ds) are still in extensions instead of core because they're recent developments, and there are still a lot of traps and dead-ends from past failures in the standard library, but it's improving with each release. The addition of sodium to core in 7.2 is a really important step forward.
Four words: Composer, Symfony, Laravel, and PHP-FIG.
The PHP ecosystem is not what it was 4 years ago, although, due to backward compatibility, the language still has a lot of cruft. On the other hand, this means old code often works directly with the latest version and benefits from its performance improvements.
If your new project is about receiving an HTTP request and sending a response, like a REST API, I think it is a good tool for the job. If you want long-lived processes (Kafka/RabbitMQ consumers, for example) or sockets, it is possible to do it in PHP, but you'll have a better experience with other stacks.
I don't get the love for Laravel. Things like validation, where you specify validators as strings like it's 2003, grate on me. Symfony's a pain in the butt for over-engineering things (DataTransformers are a good place to start), but at least it's well thought through.
The kind of worn-out ecosystem is my main gripe with PHP. A couple of parts of the ecosystem are outstanding, especially Laravel, though Laravel's DI internals are a maze. (Side note: not a fan of DI being more or less required. It's necessary for testing, but the pattern is just a mess, and the lack of static typing makes PHP kind of mediocre at DI, too.)
Outside of that there’s a ton of abandonware or just generally unmaintained libraries, though, and daemon/batch style processing is a tremendously common requirement that’s better served by other stacks.
Having used both Laravel and Spring, I don't see Laravel being much worse at DI. Spring is just doing everything by strings anyway, so if you're going to get failures, they'll still be at runtime.
For new projects, only if you have an established PHP team or related projects.
Modern OOP development in PHP 7 feels basically like an older version of Java (pre-generics): solid for engineering, but ugly for anyone who expects newer language features.
Agreed. We use PHP for automation scripts (this decision was made a long time ago). In terms of features it's a good language but it's not fun to work with.
I think the mix of static typehinting and dynamic language is actually kind of well balanced now in version 7, but maybe that's just me leaving ruby's magic behind after many years and happy about IDE support and static checks ;)
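For example, scalar type declarations plus `declare(strict_types=1)` give the IDE and the runtime something to check while everything else stays dynamic (`addTax` is a made-up example):

```php
<?php
declare(strict_types=1);

// A hypothetical helper: under strict_types the scalar parameter and
// return types are enforced at runtime, and static analyzers/IDEs can
// check call sites against them.
function addTax(float $price, float $rate): float
{
    return $price * (1.0 + $rate);
}

echo addTax(100.0, 0.2), "\n"; // 120

// addTax("100", 0.2) would throw a TypeError under strict_types.
```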
I don't know what it is about PHP. Objectively it's a nice language and the direction it has taken with version 7 looks very good. But somehow I don't enjoy it. C#, C++ or Rust motivate me to explore new approaches and are stimulating, but I don't get that vibe from PHP. It just feels like tedious work.
Yeah I can absolutely relate to that. Not sure about C# being more motivating (from some limited experiments I made), but php is definitely not an inspiring language. Or it leads your inspiration elsewhere, like generating code and only using php as a target language, etc. But over time having a language with a limited feature set is kind of a plus too, just because you can't get too crazy either, and it's easier to read other people's code and introduce new team members.
A lot of PHP folks nowadays try to distance themselves from the simplicity that made PHP popular. Avoid falling into AbstractInterfaceAdapterFactory trap and use it like it was conceived to be used; simple and procedural (https://toys.lerdorf.com/archives/38-The-no-framework-PHP-MV...). Otherwise, just use Java.
I don't know whether the alternatives are objectively better, except maybe in certain domains.
Mostly, people seem not to like PHP because they find it ugly and awkward... which is valid, but subjective. The major alternatives (JS, Ruby and Python) have documentation, frameworks and reasonably comprehensible paths to an MVP, although one could argue that PHP is still slightly better in terms of speed of deployment. Everything seems to have converged and normalized to the point that the factors affecting one's choices primarily have less to do with the language itself and more to do with the ability to hire for it, and whether or not you like significant whitespace.
I strongly disagree with this statement. 10 years ago this would be correct, but as you said yourself the tooling is so normalised that pretty much everything is an alternative to PHP.
> PHP is still slightly better in terms of speed of deployment
When it comes to securing the deployment I think Python/Ruby would be better though - if you factor that in I would say it would be faster for me to deploy a python app than configure php to be secure.
That's probably true - other languages benefit from learning from PHP's mistakes and requiring a framework to begin with to enforce a lot of the safety that PHP doesn't have out of the box, and that's not mentioning any safety that would come from the language itself.
People say PHP is a framework, but as a framework it's terrible enough that you still need a framework for the framework.
You have to write proper configs for php-fpm, configure Apache if you use that, etc. All of this takes time: it's very easy to deploy a PHP app, but configuring the server to sandbox it properly takes time and should be accounted for.
This response doesn't sit right with me. Are PHP fpm configs horribly wrong by default? Do you not put a service like nginx/haproxy/apache/gunicorn/caddy in front of Ruby/Python for a serious production deploy? Do you legitimately expect to deploy Ruby/Python without spending a minimum of an hour going over configs and assuming everything is right for your application by default, while also assuming the opposite for PHP? I read the PHP configs for fun micro-optimizations. It takes less than 10 minutes to find your framework's best-practice list and apply it.
Personally working with PHP at a pretty large scale (hundreds of thousands of requests per second), I think its scaling is far simpler, easier, and cheaper than a lot of other options I have used, in particular node.
The simple threading model applies to many cores and across a large fleet of servers with basically zero effort.
If you're looking towards hosting on shared-style infra, PHP is still the choice. Also if you just have experienced PHP devs.
Python is a pretty great alternative - it gives you more breadth to go wide than PHP does (batch processing, hadoop, machine learning, whatever really) but really there are like 10-20 different languages in the ecosystem that are all viable candidates.
Currently I'm coding on Zend Framework 1 under PHP 7.2; it's definitely more pleasant than PHP 5.anything.
The docs don't help a lot in my experience, but common sense goes a long way: always use type hints, always use classes, and wrap up any native PHP functionality that has nasty behavior.
If you want to create a new project you can use PHP, it's usually not a bad choice if you want to use a cheap shared host or want to integrate with wordpress or similar.
That said, if you plan anything complicated I strongly recommend picking literally anything else (sans Node.js, with which I have other axes to grind).
There is such a wide range of apps among visitors to this forum. It's hard to recommend something without more background. For me, PHP is all right.
But I don't follow the elite crowd's advice. I don't write classes. I don't use frameworks or routing functions like get('/', function () { ... }). In fact I try to write as little PHP as possible. I think of it as a glue language between the browser and the database.
I spent a lot of time learning SQL inside and out, and I use one of the best databases in the world, Postgres. I try to do as much processing in the database as I can. So PHP's job is mainly to select, insert, update, or delete on a view, or to call a database function. The data that PHP gets from the database is often ready for display. So PHP just needs to wrap it in a template.
$data = $db->query('select * from some_view where id = ?', [ $id ])->fetchAll();
render('path/to/template.hbs', $data);
That's a super-simple example that won't even run, some cross between PHP and pseudocode, but it gives you a hint.
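For the curious, a runnable equivalent with PDO and an in-memory SQLite database might look like this (`some_view` and the data are invented for illustration; `render()` is the commenter's own helper, elided here):

```php
<?php
// Hedged sketch: PDO against in-memory SQLite. The table and row are
// made up; in the commenter's setup this would be a Postgres view.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE some_view (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec("INSERT INTO some_view (id, name) VALUES (1234, 'widget')");

$stmt = $db->prepare('SELECT * FROM some_view WHERE id = ?');
$stmt->execute([1234]);
$data = $stmt->fetchAll(PDO::FETCH_ASSOC);

// render('path/to/template.hbs', $data); // templating step elided
echo $data[0]['name'], "\n"; // widget
```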
I just haven't let go of the newb's approach that URLs map to files.
https://www.example.com/widgets/detail/1234
would map to /path/to/docroot/widgets/detail.php. With Apache, I don't even have to use mod_rewrite to get rid of the .php suffix and to allow /1234 as PATH_INFO. I just turn on MultiViews.
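Inside detail.php, pulling the id out of PATH_INFO is then trivial (a sketch; `idFromPathInfo` is a made-up helper name, and real code would 404 on a bad id):

```php
<?php
// Sketch: extract the numeric id from PATH_INFO (e.g. "/1234").
function idFromPathInfo(string $pathInfo): ?int
{
    $id = (int) ltrim($pathInfo, '/');
    return $id > 0 ? $id : null;
}

// With MultiViews, /widgets/detail/1234 resolves to detail.php and
// Apache sets $_SERVER['PATH_INFO'] to "/1234".
$id = idFromPathInfo($_SERVER['PATH_INFO'] ?? '');
// ...then select from the view for $id, as in the snippet above...
```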
I say the following as someone who tripped over PHP around 2008 and, because it worked for most of the problem areas I threw at it, I didn't try and learn anything else.
The following is an honest appraisal from about ~10 years of gently smashing my head against a soft wall. ("I don't have a headache, but I do wonder how much I've dislocated...")
Here's the thing. PHP is fast. Really fast. I custom-built a minimal copy with no extensions builtin, cutting the startup+shutdown time to nearly nil:
$ time php -r ''
real 0m0.014s
user 0m0.006s
sys 0m0.007s
THAT ^ is on a 32-bit single-core Pentium M!
The distro-shipped copy of PHP on my Core i3 box is _literally slower_!!
$ time php -r ''
real 0m0.017s
user 0m0.013s
sys 0m0.003s
(The two benchmark times are consistent.)
The Pentium M box is the machine I'm using the most at the moment (a ThinkPad T43), and my standard workflow basically revolves around repeatedly tinkering and hitting ^S ad infinitum, as that reruns my code (inotifywait is an amazing thing). So since I'm launching the PHP CLI a hundred times an hour, fast iteration times are strongly preferable.
And get this.
I have a standard debug file ("d.php") I include by default in all my code, which gives me a d() function that parses the source of what I'm editing to extract variable names. For example, `d($x);` produces something like "test:13: $x: [ {stream#42} ]", for an array containing a stream descriptor.
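A rough approximation of such a helper, for the curious (this sketch uses `debug_backtrace()` for the file:line part; the variable-name extraction is the bit that needs the custom tokenizer, so it's omitted here):

```php
<?php
// Hedged sketch of a d()-style dump helper. It reports file:line via
// debug_backtrace(); recovering the argument's *variable name* would
// require parsing the caller's source, as described above.
function d($value): void
{
    $frame = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 1)[0];
    $where = basename($frame['file'] ?? 'unknown') . ':' . ($frame['line'] ?? 0);
    fwrite(STDERR, $where . ': ' . var_export($value, true) . "\n");
}

$x = [1, 2, 3];
d($x); // writes something like "test.php:13: array ( ... )" to stderr
```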
When I hit ^S with this standard-for-me file included at the top, PHP has to
- launch itself
- tokenize ("oh an included file, let's open and tokenize that too") and then compile
- start interpreting
- hit the first d() call
- my custom tokenizer kicks in (I am aware of token_get_all(); it doesn't work properly) and parses the entire source file
- d() prints whatever
- the code I'm working on executes and does whatever else, maybe it does a few more d() calls
- the code deliberately crashes itself at some point I want it to stop (via another call, z(), because "die" throws the stack away before PHP's shutdown functions run so you can't get line number info from die statements)
- PHP shuts down.
"0.02user 0.00system 0:00.03elapsed"
30 milliseconds, to do ALL OF THE ABOVE...
...On my 11-year-old Pentium M laptop (featuring a 5400RPM HDD!).
On this machine, Go takes over a second to build "Hello world". I (laughs) haven't even (ahaha) _tried_ Rust.
PHP has, sadly, literally spoilt me.
I am honestly - honestly - not looking forward to the day when I have to tackle something that requires a consistently long build cycle. To me, at this point, that's anything over half a second. (I've only poked programming as a hobby, but I wonder if this is why I'm still sane.)
I get uncomfortable when my code takes >200ms to compile and start thinking, for example. I hate it when I have to repeatedly wait for my program to get to the point I'm iterating on. And I'm not in my comfort zone (and very easily distracted) when my script has to do a time-consuming sequence of steps to get to the point where I'm tinkering with it. (For example if I need to parse something from a network I usually dump the request that comes back to a file, jump to the top of my script, write the parsing code there using the file contents as a reference, then move the fragment into the right place.) To clarify, this doesn't make me anxious, per se, I just get fidgety and am very likely to wind up reading HN or something. It just completely throws off my concentration. (For example right now I'm in a slightly noisy environment with a lot of activity, and even this is less distracting than slow build times are.)
So, when it comes to fast iteration, PHP kind of wins. __It's not magically faster than every other language on the planet in terms of VM overhead__, however; it just happens to have fast warm-up time. But, in the sad state of language development nowadays, most "cool" and "hip" languages have questionably long warmup time, and PHP is better than all of them - Rust, Go, whatever - for me, because it makes it easier for me to concentrate than other languages do.
So. That's what I like about PHP, and where I get the impression that maybe I should keep using it.
Here's where things go wrong.
Every time I try and do something in PHP, I have an unfortunate tendency to leave a trail of bugreports and/or "O.o?!"s on StackOverflow in my wake.
After some hesitation, I decided not to use PDO, and to use PHP's SQLite3 extension instead, based on the conclusion that since the SQLite3 extension was surely simpler, it would probably be a bit faster.
All went well. Until my single-object transactions started returning multiple rows. Duplicate rows, in fact, which were even showing up when I manually inspected my database via `sqlite3 ... .dump` at the commandline. Wat.
After carefully inspecting the code and then creatively asking Google for help I finally stumbled across the PHP bug that reported the exact behavior I was experiencing... in 2013.
https://bugs.php.net/bug.php?id=64531
Then there was the time I tried to do socket programming, and my script entered an infinite loop. That one took me quite a long time to figure out, so I ended up documenting what was missing from the manual for the benefit of others here: https://stackoverflow.com/questions/39410622/detecting-peer-...
The TL;DR here is that socket_recv() is Speshul™ and returns snowflake-flavored error codes that are basically broken.
PHP's network I/O is a disaster though.
Unfortunately I don't have something I can cite for this example, but I vaguely recall working on something some years ago and realizing that the socket_* functions provide no way to sanely trap all possible errors that can happen with a socket, but that the stream_* functions generally do. Amusingly, the stream_* functions provide no way to intercept all possible obscure/esoteric errors that can happen when creating a socket, but the socket_* functions do. And then I discovered the socket_import_stream() function... but realized that the moment I turned the "socket" into a "stream" I would lose a bunch of additional error-reporting I needed that the socket_* functions provided and the stream_* functions didn't. I unfortunately don't remember exactly what the details were with this, but I do recall going round in circles and then giving up because I realized it was an unsolvable problem.
Quite a bit more recently, I was doing some poking around with running child processes using proc_open(). (Yeah. I'm insane. :D)
This turned out to be amusing.
The first thing that happened that indicated something was going wrong was... nothing. I was staring at a frozen terminal. But something had abruptly started using 100% CPU as soon as my script started, and then the CPU usage dropped back to <10% (Chromium. Pentium M. Enough said) when I hit ^C on PHP, so...
strace time. strace showed that something was going horribly wrong and that I was getting a trillion EPIPEs a second. Oh, my code was trying to write to the wrong file descriptor at the wrong moment. Cool, mental modeling glitch, easily fixe--wait. WHY DID I HAVE TO USE STRACE TO FIGURE THAT OUT.
[Some Google Later...]
...Ah, it's because PHP is ___physically incapable___ of reporting EPIPEs and EINTRs. stream_select() has no awareness that these errors can happen, and it will just keep retrying infinitely.
If only there was a socket_import_stream() function, I could pull streams from anywhere into the socket_* functions to do I/O on them - as fiddly and fragile as the socket library is, it actually reports errors correctly!!!
Anyway. Back to proc_open(), which returns a set of stream-flavored file descriptors corresponding to the child process's stdin/stdout/stderr.
My script was essentially the PHP-ified equivalent of "cat file | process". Of course, it took about 100 lines to express this: opening the file... reading a bit of the file... stream_select()ing... writing a bit of the file to stdin... reading stdout... you get the idea.
Except "the idea" isn't what actually happened. What happened was that the script did the "read a bit of the file..." bit a few hundred times and then ran out of memory. I carefully examined my code and the select loop was, most definitely, written correctly. ALL OF EVERYONE'S WAT?
It took asking in ##php for help to figure it out. This is already long so I'll let this nice bugreport continue this conversation: https://bugs.php.net/bug.php?id=75584
The bugreport is very new (2-3 days old, haha!) so it has no replies yet. Probably won't for some time. You know, I could literally place a legitimate wager on whether the next qualitative reply in that thread is "ah, I see, let's see about fixing this" or someone describing how many circles they went around for however many weeks before finding this bug that perfectly explains the situation they're in.
EDIT: One last thing. Nearly forgot this! I was doing some tinkering with PHP a while back and decided to be cute and add visual extended-validation printing into some code I was writing, so if a site had an EV cert it would show the company name on the console.
This went horribly wrong, as I discovered that CURL's TLS info-extraction functions are horribly, horribly broken: https://bugs.php.net/bug.php?id=71929
This is unfixable.
I stop nao.
But I've demonstrated that PHP
- has bugs writing to databases
- cannot sanely perform network I/O
- has sanity-checking issues dealing with TLS
I think I have said enough.
I can has new REAL programming language... that's _faster_ than PHP? :D
Oh, two more things, as a footnote -
One, the reason why token_get_all() doesn't actually work is that I wanted to be able to arbitrarily malform my d() calls - "d" string on one line, opening parens three lines down with a bunch of newlines and tabs in between, pile of arguments shoved in the middle, etc - and I wanted the reported line number to be for the opening parenthesis. token_get_all() doesn't report line numbers for certain tokens - including parens.
Two, a couple days' headscratching helped me figure out PHP's build system. The following is not documented anywhere as far as I know. If you go into ext/ inside the PHP source folder, running 'find | grep config0.m4' will show you all the extensions that _must_ be compiled in. Currently this is just four things (libxml, streams, I forget the others). Every other extension ('find | grep config.m4') can optionally be built as a module by disabling it at PHP build time (via --disable-all, or manually disabling the extension) and later going into the extension directory and doing the "phpize; configure; make install" dance.
> I can has new REAL programming language... that's _faster_ than PHP? :D
Have you tried node.js 8 or 9?
There's also duktape but I don't know what APIs it has.
$ time nodejs test.js # second run
hello world
real 0m0.066s
user 0m0.058s
sys 0m0.009s
$ time duk test.js # second run
hello world
real 0m0.002s
user 0m0.002s
sys 0m0.000s
edit: duktape has a very limited API built in, and it seems one needs to add any extra function in C. It doesn't seem to have file access. Node.js is very fast to me though.
The problem with super-minimal environments like Duktape, PicoC, Lua, mruby, TinyPy, etc is precisely what you describe: there's no batteries-included behavior.
I am seriously considering acquiescing to the performance impact of Go build times (yup, it's slow, who woulda thunk) because of the sane-batteries-included nature of the language.
I forgot to mention my formerly favorite language: Python. It's _much_ faster than Node.js in startup and relatively basic things. Node wins on almost everything I make, though.
$ time python test.py # second run
hello world
real 0m0.009s
user 0m0.009s
sys 0m0.000s
$ time node -e ''
real 0m0.098s
user 0m0.084s
sys 0m0.010s
Of course on my desktop it's a
$ time node -e ''
real 0m0.002s
user 0m0.000s
sys 0m0.000s
tiny bit different.
Python on my laptop is just slow enough that I really notice it:
$ time python -c ''
real 0m0.032s
user 0m0.022s
sys 0m0.009s
Of course PHP is all 0.014, 0.010, 0.012, etc.
Incidentally, this T43 is a backup machine I'm using while my desktop is on indefinite loan to a family member after their laptop broke. This will be fixed eventually; I'm not sure how.
But I've discovered that this old machine is a remarkably good performance catalyst; something that runs blindingly fast on this machine will run really, really well on a faster box - and the thing is, if I write stupid or inefficient code on this older laptop, I'll notice I'm doing it wrong sooner, because it takes less to make this machine fall over from inefficiency.
If only I could tell the above to the Chromium team, though... sooo many Chrome issues... (and Firefox is unfortunately slower on old hardware than ever before! >.<)
It seems almost all of the time node.js takes to run hello world is spent loading its own modules. You should try something like this:
// watch.js
var file = process.argv[2]
console.log('press enter to run ' + file)
process.stdin.on('data', function () {
  var [s1, ns1] = process.hrtime()
  for (var k in require.cache) {
    delete require.cache[k]
  }
  require(file)
  var [s2, ns2] = process.hrtime()
  var s = (s2 - s1) + (ns2 - ns1) * 1e-9
  console.log('time: ' + s.toFixed(9))
})
Result: less than 1ms on first run
$ node watch.js ./test.js
press enter to run ./test.js
hello world
time: 0.000952656
hello world
time: 0.000284171
edit: script now deletes all cache, not only the passed script (result is the same if the script doesn't require others)
It's possible "php -nr ''" might be faster; -n stops loading php.ini, which is what points to all of the modules PHP ships with.
> These startup metrics are also super arbitrary anyway, as in web servers code loading is behind your app server anyway (uwsgi, php-fpm, whatever).
This is true; but only about 1-5% of everything _I_ do (as a hobbyist) faces a webserver, and then the things that do require a webserver are simplistic enough that I can write said webserver in bash and run it from socat.
Yes, strange for a bug fix to be top of a list of features, however...
I don't agree that it's "not the most eventful" release: libsodium being included in the core is a big tick in the box in terms of maturing as a language; modern, secure cryptography out of the box.
I'm surprised PHP hasn't had this for a while. Isn't PHP nearly 20 years old? Is it common for web languages to grow so old before they get crypto in the standard library?
You are right, of course, but nonetheless this is a significant step forward for the language (and the ecosystem as a whole).
I do hope that - just like yourself, I am sure - the next significant step is more organically and proactively developed, and not treated (at least in the wider community's eyes) as a belated addition to a bewildered leviathan.
Ok, but that's different from including crypto at all. I'm surprised Go uses those algorithms in its standard library TLS stack, but they're not made available as part of the standard crypto packages...
OpenSSL and Mcrypt have been available for the longest time. I don't know if that counts if we word-lawyer "standard library", but it definitely felt that way.
Yes. HHVM has now explicitly diverged from PHP, so compatibility with PHP 7 code is not guaranteed, nor is compatibility with any given PHP 7 library.
If you work for Facebook (or Slack), this is not a problem. You still have talented engineers working to improve the language. But there also isn't the guarantee that those improvements will benefit the average use case.
Also PHP is available everywhere. HHVM, not so much.
This has been a huge disappointment for me. Hack seemed really promising, but some of the decisions they've made really seem to have strangled any hope of there being much of an ecosystem outside a few companies that are heavily invested in it.
I like Hack as well, although just as a hobbyist, but it doesn't seem to have gotten a lot of traction. Hosting options appear limited, and support for HHVM seems to be decreasing, rather than increasing. Many developers seem hesitant to use anything tied to Facebook after the React patent license controversy.
Hack's biggest success thus far seems to be serving as a wakeup call to PHP itself. What it needs as a language is some version of PHP's ubiquitous hosting infrastructure, preferably one that supports calling Composer with HHVM and allows XHP as well, and actually gets updated and maintained (I'm looking at you Openshift.)
A very large percentage of the popular packages have ceased to concern themselves with Hack compatibility, so as the languages continue to diverge, the benefit of Hack taking advantage of the PHP package ecosystem will decrease.
Starting a web project with Kotlin in 2017 has a slightly larger set-up cost than PHP (you have to understand Gradle or Kobalt), but it is definitely a better option. Check out http4k or ktor, and Squash (ORM).
Previously you could only specify class names and interfaces; now you can specify `object` (any object of any class, including StdClass). It's mostly a way to type hint "not an array or any scalar". Class-name hints have been around a while and are usually way more useful.
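For illustration (`describe` is a made-up function name):

```php
<?php
// PHP 7.2's "object" type declaration: accepts any object of any
// class, and rejects arrays and scalars with a TypeError.
function describe(object $thing): string
{
    return get_class($thing);
}

echo describe(new DateTime()), "\n";       // DateTime
echo describe((object) ['a' => 1]), "\n";  // stdClass

// describe([1, 2, 3]) would throw a TypeError.
```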
No more Mcrypt, and fewer cases of having to use OpenSSL and using it wrong. This is a huge step forward for security.
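For a taste, authenticated secret-key encryption is now a few lines with no extension to install (a minimal sketch; key management is out of scope here):

```php
<?php
// PHP 7.2+: libsodium is in core. Authenticated symmetric encryption
// with XSalsa20-Poly1305 via the secretbox API.
$key   = sodium_crypto_secretbox_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);

$ciphertext = sodium_crypto_secretbox('attack at dawn', $nonce, $key);
$plaintext  = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);

echo $plaintext, "\n"; // attack at dawn
// Tampering with $ciphertext makes secretbox_open() return false
// instead of silently decrypting garbage.
```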