Solved MySQL Problem :S

Discussion in 'Plugin Development' started by Lubenica007, Jun 10, 2013.

Thread Status:
Not open for further replies.
  1. Offline

    Lubenica007

    Hello,
    First, thanks for reading this :D I am writing a statistics plugin and I have a problem with MySQL. My plugin writes to the MySQL database on every block break. (I know that is not great, but I don't know how to build a caching system because I have only just started using MySQL in Bukkit plugins; any help will be appreciated :) ).

    Here is the code for updating data:

    Code:
    public void Register_BlockBreak(Player player) {
        try {
            Connection conn = getSQLConnection();
            int blockBreaks = 0;
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT Blocks_Broken FROM `breaks` WHERE Nick = '" + player.getName() + "'");
            while (rs.next()) {
                blockBreaks = rs.getInt("Blocks_Broken");
            }
            int newBreaks = blockBreaks + 1;
            st.execute("replace into `breaks` (Nick, Blocks_Broken) VALUES ('" + player.getName() + "'," + newBreaks + ")");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    And here is the code that calls Register_BlockBreak (it's in another class):

    Code:
    Bukkit.getScheduler().runTaskAsynchronously(plugin, new Runnable() {
        @Override
        public void run() {
            plugin.db.Register_BlockBreak(player);
        }
    });
    My problem is that when I quickly break, for example, 15 blocks, the table says I have broken (for example) only 8 blocks. If I remove the async scheduler it works, but the timings would be too high :S

    I hope I will get help from a more experienced developer than me :p I'm good at developing, but I've only just started with MySQL and async tasks. Thanks for reading this!
     
  2. Offline

    chasechocolate

    Another way you could do it is to store the number of blocks broken in a variable (a HashMap, perhaps) and write them to the MySQL table every few minutes or so in an async repeating task.
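
    Something along the lines of this suggestion could look like the sketch below. This is only a rough sketch of the idea; the class and method names are my own invention, not from the thread:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a per-player break counter that the main thread fills
    // and an async task periodically empties into MySQL.
    class BreakCache {
        private final ConcurrentHashMap<String, Integer> breaks = new ConcurrentHashMap<>();

        // Main thread: bump the player's counter on every BlockBreakEvent.
        public void record(String playerName) {
            breaks.merge(playerName, 1, Integer::sum); // atomic per-key update
        }

        // Async task: atomically take and reset every counter so the
        // caller can flush the snapshot to the database.
        public Map<String, Integer> drain() {
            Map<String, Integer> snapshot = new HashMap<>();
            for (String name : breaks.keySet()) {
                Integer count = breaks.remove(name); // remove() is atomic per key
                if (count != null) {
                    snapshot.put(name, count);
                }
            }
            return snapshot;
        }
    }
    ```

    record() would be called from the BlockBreakEvent handler, and drain() from the async repeating task that writes the counts to MySQL; a break that lands between remove() and the flush simply re-creates the entry and is picked up on the next drain.
    
    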
     
  3. Offline

    Rprrr

    Lubenica007

    Shouldn't you add " WHERE Nick = '" + player.getName() + "'" " here? Otherwise it might change all 'Nick' and 'Blocks_Broken' values (at least, if I'm correct. Please tell me; I'm quite new to SQL, so I'm trying to learn).

    "replace into `breaks` (Nick, Blocks_Broken) VALUES ('" + player.getName() + "'," + newBreaks + ")"
     
  4. Offline

    Lubenica007


    Thank you for your reply. I have heard of HashMap and I will learn about it later, but right now I want to solve this bug :S If I break blocks slowly, the database is written with correct data, but if I break them fast it doesn't work.


    Nope, this is working fine. Look here: http://blogs.coldbuffer.com/inserting-or-updating-records-using-mysql-replace
     
    Rprrr likes this.
  5. Offline

    Lubenica007

    BUMP. Please help me :S
     
  6. Offline

    CubieX

    Doing the "replacing" async might look like a good idea, and basically it is. But because you are updating a single data record from multiple async tasks, it is not as simple as it seems.
    If your async tasks execute slowly (this depends on various factors and is not predictable, because, well, it's async ;) ), it's possible
    that a task started later gets to write its value to the DB first, while an earlier task has just finished its SELECT query but has not yet written its new value.
    So the updates will not necessarily happen in the same order in which your async tasks were created!
    And this will mess up your data record and produce wrong data.

    That means: if you are going to access the same data record from multiple async tasks,
    you have to implement a mechanism that ensures a player's data is updated without parallel tasks poking around in that very same record at the same time.
    (In other words: you have to manually "sync" access to the data record somehow.)

    One way to do this is to cache the block breaks for a while for each player, and then run your async task to update the values.
    And while the async task for a specific player is running (check it by its TaskID != null in the BlockBreakEvent handler),
    do not let another async task be created for that player.
    Cache the data and create a new async task for this player at the earliest when the current async task has finished.

    Updating different players' data at the same time with async tasks is fine.
    But there may only be one async task per player's record at a time.
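
    The "only one async task per player at a time" rule could be enforced with something like this. Again, just a sketch; the class and method names are made up:

    ```java
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a guard that allows at most one in-flight async DB task
    // per player at any moment.
    class PerPlayerTaskGuard {
        private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

        // Returns true if the caller may start a task for this player now;
        // add() is atomic, so two threads cannot both get true.
        public boolean tryStart(String nick) {
            return inFlight.add(nick);
        }

        // Call when the player's async task has finished (or was cancelled),
        // so the next task for this player may be scheduled.
        public void finish(String nick) {
            inFlight.remove(nick);
        }
    }
    ```

    The BlockBreakEvent handler would keep caching breaks while tryStart() returns false, and only schedule a new async task once the previous one has called finish().
    
    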
     
    Lubenica007 likes this.
  7. Offline

    LucasEmanuel

    If you are lazy like me, you can use Java's built-in thread synchronization and have all threads share access to the same object. This is a rather crude but simple way of ensuring that only one thread at a time has access to the database.

    Basically, what I did was create a single instance of such an object and let all threads access the database through it. The object only lets one thread at a time talk to the database. ;)
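
    A minimal sketch of that shared-gateway idea, with an in-memory map standing in for the real MySQL table (all names are invented for illustration):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a single shared gateway: synchronized methods mean only one
    // thread at a time can run the read-modify-write sequence. The map here
    // is a stand-in for the real MySQL table.
    class StatsGateway {
        private final Map<String, Integer> table = new HashMap<>(); // pretend DB

        // The whole "SELECT, add, UPDATE" sequence runs under one lock,
        // so concurrent callers cannot lose each other's increments.
        public synchronized void addBlockBreaks(String nick, int amount) {
            int current = table.getOrDefault(nick, 0); // the "SELECT"
            table.put(nick, current + amount);         // the "UPDATE"
        }

        public synchronized int getBlockBreaks(String nick) {
            return table.getOrDefault(nick, 0);
        }
    }
    ```

    Every thread in the plugin would go through the one shared StatsGateway instance instead of opening its own path to the database.
    
    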
     
    Lubenica007 likes this.
  8. Offline

    Lubenica007

    Thanks for your help, guys, you helped me a lot :) But one more question... I have made a cache system for my plugin, but I don't know how good it is...

    This code is in one class (BlockBreak):
    Code:
    if (!Breaks_Cache.containsKey(event.getPlayer().getName())) {
        Breaks_Cache.put(event.getPlayer().getName(), 1);
    } else {
        Breaks_Cache.put(event.getPlayer().getName(), Breaks_Cache.get(event.getPlayer().getName()) + 1);
    }
    The HashMap is created with this code:
    Code:
    public static Map<String, Integer> Breaks_Cache = new HashMap<String, Integer>();
    This code is in a second class (for writing to the database):
    Code:
    @SuppressWarnings("rawtypes")
    public synchronized void Update_Stat(final String stat, final String nick, final int broj) {
        try {
            Connection conn = getSQLConnection();
            int blockBreaks = 0;
            Statement st = conn.createStatement();
            // Block Break
            if (stat.equals("BlockBreak")) { // use equals(), not ==, to compare string contents
                ResultSet rs = st.executeQuery("SELECT Blocks_Broken FROM `xStats_Data` WHERE Nick = '" + nick + "'");
                if (rs.next() && rs.getString("Blocks_Broken") != null) {
                    blockBreaks = rs.getInt("Blocks_Broken");
                    int novi = blockBreaks + broj;
                    st.executeUpdate("UPDATE xStats_Data SET Nick='" + nick + "', Blocks_Broken='" + novi + "' WHERE Nick='" + nick + "'");
                    Iterator<Entry<String, Integer>> iter = BlockBreak.Breaks_Cache.entrySet().iterator();
                    while (iter.hasNext()) {
                        Map.Entry mEntry = (Map.Entry) iter.next();
                        BlockBreak.Breaks_Cache.put((String) mEntry.getKey(), BlockBreak.Breaks_Cache.get(mEntry.getKey()) - broj);
                    }
                } else {
                    st.executeUpdate("INSERT INTO xStats_Data (Nick, Blocks_Broken) VALUES('" + nick + "','" + broj + "')");
                }
            }
        // Print the error if there is one
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    And this code is in the main class, calling the method above to update the data:

    Code:
    getServer().getScheduler().runTaskTimerAsynchronously(this, new Runnable() {
        @SuppressWarnings("rawtypes")
        @Override
        public void run() {
            // Block Break
            Iterator<Entry<String, Integer>> iter = BlockBreak.Breaks_Cache.entrySet().iterator();
            while (iter.hasNext()) {
                Map.Entry mEntry = (Map.Entry) iter.next();
                db.Update_Stat("BlockBreak", (String) mEntry.getKey(), (int) mEntry.getValue());
            }
        }
    }, 20 * 5, 20 * 5);
    Is this good? Can I somehow optimize my code?

    Thanks a lot! :D
     
  9. Offline

    metalhedd

    Instead of doing a SELECT and a REPLACE every time, why not do:
    "UPDATE xStats_Data SET BlocksBroken = (BlocksBroken + ?) WHERE nick = ?"? Then you don't have to worry about two tasks attempting to update at the same time; the database ensures atomic operations.

    Also, you should be using PreparedStatements.
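
    Put together, that single atomic UPDATE with a PreparedStatement might look roughly like this (the table and column names follow the thread; the class and method names are my guess, and this obviously needs a live MySQL connection to actually run):

    ```java
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Sketch: one atomic UPDATE instead of SELECT + REPLACE. The database
    // applies the increment atomically, so concurrent calls cannot lose
    // updates, and the ? placeholders also close the SQL-injection hole
    // left by concatenating player names into the query string.
    class StatsDao {
        public void addBlockBreaks(Connection conn, String nick, int amount) throws SQLException {
            String sql = "UPDATE xStats_Data SET Blocks_Broken = Blocks_Broken + ? WHERE Nick = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, amount);
                ps.setString(2, nick);
                ps.executeUpdate(); // returns 0 if the row doesn't exist yet
            }
        }
    }
    ```

    When executeUpdate() returns 0, the row doesn't exist yet and a separate INSERT would be needed, as in the OP's code.
    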

    EDIT by Moderator: merged posts, please use the edit button instead of double posting.
     
    Last edited by a moderator: Jun 1, 2016
  10. Offline

    Lubenica007

    Okay, thanks, I will try that method later. So the cache system is good?
     
  11. Offline

    metalhedd

    I'm not sure, I didn't read it SUPER thoroughly, but it looks like it will have a problem, because a HashMap is not thread-safe. You might be editing a value on the main thread while your async task is trying to read it. Use a ConcurrentHashMap instead; it's the same thing, but thread-safe :)
     
    CubieX likes this.
  12. Offline

    Lubenica007


    Is this correct?:
    Code:
    public static ConcurrentHashMap<String, Integer> Places_Cache = new ConcurrentHashMap<String, Integer>();
    Thank you very much :)
     
  13. Offline

    metalhedd

    looks good to me :)
     
    Lubenica007 likes this.
  14. Offline

    CubieX

    I don't know if this will automatically place write and read locks on the table. I guess this depends on the database engine used (InnoDB, MyISAM...) and its configuration,
    because this will only solve the problem if the table, or at least the accessed row, is locked for both actions.
     
  15. Offline

    metalhedd

    Are you sure about that? I'm not 100% positive, but I think you're wrong. Atomicity is a fundamental principle behind an RDBMS.
     
  16. Offline

    CubieX

    metalhedd
    No, I'm not sure.

    But I'm also not sure this problem can be solved by the DB's mechanisms alone.
    The OP uses a SELECT query and afterwards a REPLACE (or by now probably an UPDATE) query
    in each async task.
    So the DB cannot know whether taskA, which has just read a value, is the same task that tries to update the
    same record next.
    It could be taskB, which read the table before taskA did. Maybe taskC updated the record between
    taskA's and taskB's read operations.

    Understand what I mean?
    The core problem is still there, even if the DB engine ensures atomic operations.
    For the DB, all access happens one after another; the engine ensures this.
    But tasks may be interrupted at any time in favor of another task,
    and maybe that one will also try to update the record and manage to finish its work earlier. -> Boom!
    So for the plugin, the order is undefined unless the programmer handles it explicitly.

    Perhaps using transactions (possible with the InnoDB engine, for example) could help here, to lock the table until both queries have executed.
     
  17. Offline

    metalhedd

    My solution eliminated the SELECT; there is only one query being executed, and it is unaffected by concurrent runs because each update is atomic. One value may get added before the other, but that order has no impact, because nobody is reading the value mid-write.
     
  18. Offline

    Lubenica007

    Marking this thread as solved. I fixed the bug by creating a cache using a ConcurrentHashMap and writing to the database every 10 seconds with prepared statements. Thank you all for your time and your help!
    Regards :)
     