Is it OK to include a 2,000-line function file on every page?

I have a PHP function file with ~2,000 lines, and I always include it on every page. Do that many lines make my site slower? Should I split the function file into several smaller files? Thank you.


Why do you have 2k lines of code that must run on every page? Depending on what it's doing (processing, database calls, etc.) it could significantly slow down the site, not to mention clutter up the application itself.

If I'm understanding you right, the file is simply a bunch of function definitions? If so, they aren't run unless you call them, and as such won't slow things down noticeably.


If there are only function definitions inside that file, then it shouldn't matter much. The functions aren't executed until you actually call them... although PHP still parses and compiles the whole file on every include, so that does take some time.
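To illustrate the difference between parsing and executing, here's a minimal sketch (the file and function names are made up):

    <?php
    // functions.php -- imagine ~2,000 lines of definitions like this one.
    // Including this file costs parse/compile time, but the body below
    // does not execute until some page actually calls the function.
    function format_price($amount) {
        return number_format($amount, 2);
    }

    <?php
    // page.php -- pays the parse cost of the include, but only the
    // functions it calls ever run.
    require_once 'functions.php';
    echo format_price(19.5);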

And if by 2000 rows you mean 2000 lines, then you shouldn't worry too much. It is good practice to group similar functions into separate files, but if you're going to include all of those files on every page anyway, you're just adding overhead with the extra include calls (see the sketch below).
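A split layout usually ends up looking something like this (the file names are hypothetical); note that every require is one more file for PHP to locate and read:

    <?php
    // bootstrap.php -- hypothetical split by topic. If every page still
    // includes all of these, you gain organisation, not speed.
    require_once __DIR__ . '/functions/db.php';
    require_once __DIR__ . '/functions/strings.php';
    require_once __DIR__ . '/functions/templates.php';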


Any number of additional lines will slow things down to some small degree, but usually not enough to notice. And 2000 lines isn't a lot -- for a single file it's on the large side, but most frameworks include that much without batting an eye, and they run fine.

Don't worry about it unless the site is actually slow -- and even then, consider algorithmic improvements before worrying about whether PHP is parsing too much.


It would have some effect, but it would be very minimal -- more than likely nothing noticeable.


It comes down to how fast your server's disk system is versus how fast its CPU is.

Disk speed determines how fast those 2000 lines can be found on the disk, slurped into memory, and fed into the PHP parser. CPU speed determines how fast the PHP parser can chew through the 2000 lines. (I'm ignoring other factors such as memory speed; we'll just pretend the computer is a CPU with a disk.)

The only way to say for sure is to benchmark it. Maybe one big file is far faster than multiple smaller ones on your particular development server, but the exact opposite on the production machine.

So, let's fake up some numbers. Assume the filesystem and physical disk together take a constant 0.5 seconds to locate any file on the disk, and 0.2 seconds per 1000 lines to load it. Then let's assume that PHP's parser has perfect performance and takes exactly 0.1 seconds to parse 1000 lines, with no startup overhead. Of course these are ludicrous numbers, but it's just for illustration.

So, with your 2000-line file, we end up with:

    0.5 seconds to locate on disk
    2000/1000 * 0.2 = 0.4 seconds to load
    2000/1000 * 0.1 = 0.2 seconds to parse
    = 1.1 seconds to make your 2000-line file available for use
      (0.5 seconds to locate, 0.6 seconds to load/parse)

Now let's say you've rejigged things and split the file into smaller chunks. You start using one particular script heavily, which requires 3 of those smaller scripts to be loaded. Let's pretend those smaller chunks are all 500 lines each. So:

    0.5 seconds * 3 files = 1.5 seconds to locate on disk
    500/1000 * 0.2 * 3 = 0.3 seconds to load
    500/1000 * 0.1 * 3 = 0.15 seconds to parse
    = 1.95 seconds, much slower than the single bigger file
      (1.5 seconds to locate, 0.45 seconds to load/parse)

So, with this contrived example, you've reduced your loading/parsing overhead by 0.15 seconds (25% reduction), but you've TRIPLED the disk time.
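If you want to play with the made-up constants above, here's a throwaway sketch of the toy model -- purely illustrative, not real measurements:

    <?php
    // toy_model.php -- the fake constants from the example above,
    // packaged so you can plug in your own file layout.
    function include_cost(array $lineCounts) {
        $locate = 0.5 * count($lineCounts);            // 0.5 s to find each file
        $load   = array_sum($lineCounts) / 1000 * 0.2; // 0.2 s per 1000 lines
        $parse  = array_sum($lineCounts) / 1000 * 0.1; // 0.1 s per 1000 lines
        return $locate + $load + $parse;
    }

    printf("one 2000-line file:   %.2f s\n", include_cost([2000]));          // 1.10
    printf("three 500-line files: %.2f s\n", include_cost([500, 500, 500])); // 1.95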

So, again, the only way to say what'll work best in your situation is to benchmark it. Run a series of loads with the single large monolithic file, versus a series of loads with multiple smaller fragments.
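A minimal sketch of such a benchmark (the file name is a placeholder for your own include; run it many times, e.g. in a shell loop, and average the output, since any single measurement is mostly noise):

    <?php
    // bench.php -- time one include. Make a second script that requires
    // the split-up files instead, and compare averages over many runs.
    $start = microtime(true);
    require 'functions.php'; // placeholder: your 2,000-line file
    printf("%.6f seconds\n", microtime(true) - $start);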
