HowTo - Creating dump files from your database

You recently bought your first cloud database and are eager to start using it, so you want to load a lot of data into it. Because pushing a lot of data at once can overwhelm the database, we have put together this quick guide on how to do it safely.

 

Creating the Dump file without data

The first step is to dump the existing database. There are several ways to do this, but we recommend being selective so that not everything gets pushed at once. To begin, create a dump of the database structure without any data:

mysqldump -u <user> -p --no-create-db --no-data <database> > schema.sql

This command tells mysqldump to write out the database's table structure without any actual data, which makes the initial import much faster. In the next steps, we'll create separate dump files for each table to keep every import small.
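To load the schema into the new database, you can feed that file back through the standard mysql client. A minimal sketch, assuming you redirected the dump above to a file named schema.sql:

mysql -u <user> -p <database> < schema.sql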

 

Creating the Dump file per table with data

We've now given MySQL the schema, so it knows what's coming. Next come the tables themselves. Rather than pushing all tables and rows at once, we'll go table by table. To do this, create a dump file for each table:

mysqldump -u <user> -p --no-create-db --no-create-info <database> <table> > <table>.sql

The --no-create-info option tells mysqldump to leave out the CREATE TABLE statement, because the table already exists from the schema import. Without this option, the dump would drop and recreate the table on import, and any data already in the table would be lost.
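If you have many tables, you can script this step instead of typing the command once per table. A minimal sketch, assuming a bash shell; note that passing the password inline with -p<password> avoids a prompt on every iteration but exposes it to your shell history, so treat this as illustrative only:

for table in $(mysql -u <user> -p<password> -N -e "SHOW TABLES" <database>); do
    mysqldump -u <user> -p<password> --no-create-db --no-create-info <database> "$table" > "$table.sql"
done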

 

Creating the Dump file for large tables

Suppose your database has a table with over 10,000 rows. That is a lot of data for the database to handle all at once, and since we don't want it to choke during the import, we can run the following command to dump only particular rows. Keep in mind that you must repeat this until the entire table has been written out across the separate files.

mysqldump -u <user> -p --no-create-db --no-create-info --where="column='<string>'" <database> <table> > <file>.sql

Because we pass the --where option, mysqldump only exports the rows that match the condition. The value of --where can be any valid SQL WHERE clause, so you can also filter with operators such as LIKE or BETWEEN. Being specific like this is preferable to dumping all 10,000 rows into a single file.
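A common way to slice a large table is by primary-key range rather than by a string column. A minimal sketch, assuming a bash shell and a numeric id column; the chunk size of 5,000 rows and the output file names are placeholders:

for start in 0 5000 10000; do
    end=$((start + 4999))
    mysqldump -u <user> -p<password> --no-create-db --no-create-info --where="id BETWEEN $start AND $end" <database> <table> > "part_$start.sql"
done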

 
