Database strain?

Python (Forum Leader, The Royal RAM)
Let's just say, for example, I had a MySQL table and inside it had 500,000 records, each with a unique 16 digit number. Would a PHP script (or the server, to be more precise) struggle to find a specific number, and would it be slow?

Also, what if, for example, the table held over a million records... any performance issues?
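
For concreteness, the kind of lookup I mean would be something like this (the table and column names are just examples I made up):

    <?php
    // Purely illustrative: a table of unique 16 digit numbers, e.g.
    //   CREATE TABLE numbers (
    //       num  CHAR(16) NOT NULL,
    //       hits INT      NOT NULL DEFAULT 0
    //   );
    $db = new mysqli('localhost', 'user', 'pass', 'mydb');

    // Find one specific number among the 500,000.
    $needle = '1234567890123456';
    $stmt = $db->prepare('SELECT hits FROM numbers WHERE num = ?');
    $stmt->bind_param('s', $needle);
    $stmt->execute();
    $stmt->bind_result($hits);
    $found = $stmt->fetch();   // true when the number exists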

Comments

  • danielneri (WP V.I.P., VPS - Virtual Prince of the Server)
    I think it might have some... or it's just my host...
  • Python (Forum Leader, The Royal RAM)
    If it does affect performance, then to what extent? By this I mean, could it add an extra 10 seconds or so onto the script loading time?

  • danielneri (WP V.I.P., VPS - Virtual Prince of the Server)
    Well, technically I would think the script would take at least that long for a million rows in a table... that's deadly, you know... inefficient.
  • Python (Forum Leader, The Royal RAM)
    So how would it be done? Would the data be split up across more than one table?

  • danielneri (WP V.I.P., VPS - Virtual Prince of the Server)
    You could possibly create a script that splits the data across different tables... just a suggestion... to lessen the load on one table. A rough sketch of the idea is below.
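
    To show what I mean, here's a completely made-up sketch that picks the table from the first digit of the number (just one way you could split it; all the names are hypothetical):

        <?php
        // Illustrative only: rows split across tables numbers_0 .. numbers_9
        // by the first digit of the number.
        $needle = '1234567890123456';
        if (!ctype_digit($needle) || strlen($needle) !== 16) {
            die('bad number');              // never interpolate unchecked input
        }
        $table = 'numbers_' . $needle[0];   // e.g. numbers_1

        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        $stmt = $db->prepare("SELECT hits FROM $table WHERE num = ?");
        $stmt->bind_param('s', $needle);
        $stmt->execute();
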
  • Chroder (Senior Member, The Royal RAM)
    Should be okay. 500,000 is not that large for a DBMS. For example, you see vB forums with 5 million+ posts, which should give you an idea.

    What you need to watch is the number of times you access the table. If you have too many reads and writes going on all the time, you'll run into load problems.
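
    The thing that really decides it is an index. With a UNIQUE index on the number column, MySQL does a quick B-tree lookup instead of scanning all 500,000 rows; without one, every search reads the whole table. A sketch, assuming the hypothetical numbers table from the first post:

        <?php
        // One-time setup: index the column you search on.
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        $db->query('ALTER TABLE numbers ADD UNIQUE INDEX idx_num (num)');

        // Sanity check: EXPLAIN should show type=const (index lookup)
        // rather than type=ALL (full table scan).
        $res = $db->query(
            "EXPLAIN SELECT * FROM numbers WHERE num = '1234567890123456'"
        );
        print_r($res->fetch_assoc());
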
  • Python (Forum Leader, The Royal RAM)
    OK, so what about one read and one write each time the script is run...

    The read part simply finds the record and the write just updates it (sketch below)...

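    Here's roughly what I have in mind, assuming the same hypothetical numbers table from my first post:

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        $needle = '1234567890123456';

        // The read: find the record.
        $stmt = $db->prepare('SELECT hits FROM numbers WHERE num = ?');
        $stmt->bind_param('s', $needle);
        $stmt->execute();
        $stmt->bind_result($hits);
        $stmt->fetch();
        $stmt->close();

        // The write: update it.
        $stmt = $db->prepare('UPDATE numbers SET hits = hits + 1 WHERE num = ?');
        $stmt->bind_param('s', $needle);
        $stmt->execute();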

  • Chroder (Senior Member, The Royal RAM)
    Well, doing 50 reads and 50 writes in one script can be handled fine; that's not really the problem. It's the number of users doing it all at once. One read and one write should work well even under high traffic, though.

    When you start looking at insane amounts of traffic (like Google or Yahoo, or even "smaller" sites like Digg), you need to start looking into how to split up the load and decrease the amount of work you do (using slave servers, caching, etc.).

    You probably won't need to worry about server load when working with 500,000 records and running simple find/update queries. If multiple users are going to be requesting the same kind of record (i.e. the script tries to fetch and update a record at the same time as another user), it's smart to keep the updates for later. Perhaps write a script that logs the updates to a file, then use a cron job to apply them to the database every hour; there's a rough sketch at the bottom of this post.

    There's all sorts of factors to look at :)
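
    A rough sketch of the log-and-batch idea (the file path, table, and column names are all made up):

        <?php
        // --- In the page script: log the update instead of writing ---
        // $needle is the 16 digit number from the request.
        file_put_contents('/tmp/pending_updates.log',
                          $needle . "\n", FILE_APPEND | LOCK_EX);

        // --- In an hourly cron script: replay the whole batch ---
        $log = '/tmp/pending_updates.log';
        if (file_exists($log)) {
            rename($log, $log . '.work');   // grab the current batch
            $nums = file($log . '.work',
                         FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

            $db = new mysqli('localhost', 'user', 'pass', 'mydb');
            $stmt = $db->prepare('UPDATE numbers SET hits = hits + 1 WHERE num = ?');
            foreach ($nums as $num) {
                $stmt->bind_param('s', $num);
                $stmt->execute();
            }
            unlink($log . '.work');
        }

    That turns a write on every request into one batched pass, at the cost of the counts lagging by up to an hour.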
  • Python (Forum Leader, The Royal RAM)
    Well, I highly doubt this script will receive the sort of traffic Yahoo and Google get, so that shouldn't be a problem :)

    Thanks

  • Python (Forum Leader, The Royal RAM)
    By the way... in case you were wondering what it's for... www.thebigcount.com :)
