Logical data modeling is a method for discovering the data, relationships, and rules of a business, collectively called the business rules. It is the first step in designing the data management process and serves as the basis for physical data modeling, which deals with the physical implementation of the database.
Work on logical data modeling usually begins in the requirements analysis phase, when the project team studies the business requirements. From the initial requirements and subsequent detailed analysis, the systems analysts build an initial data model representing the company's data and processes. During the transition from the systems analysis phase to the systems design phase, the data model is refined and acquires further detail. Finally, in the systems design phase, the data model is fixed in its final version, and any change to it requires confirmation from both the client and the project team. Changing the data model in the later phases of development or testing is highly undesirable, especially for relational databases. The logical data model should therefore be defined at the beginning of the system's development and, ideally, not changed afterwards.
Relational databases, as noted earlier, offer many advantages in data integrity, consistency, and transaction management, all of which are vital in this project. At the same time, most modern applications must retrieve data as quickly as possible, and that is when denormalizing a relational database becomes worth considering. In this project, we have denormalized selected parts of the databases in order to optimize performance and improve data retrieval.
Normalization is the process of organizing data so as to eliminate redundancy; denormalization, conversely, can be thought of as deliberately placing one fact in several places. This tends to speed up data retrieval, usually at the expense of data modification, since every copy of the fact must be kept in sync. Instead of denormalizing the whole database, in this automation project we focused on particular parts in order to speed up document generation. Developers, however, should apply this technique only for specific, well-justified purposes.
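The tradeoff above can be illustrated with a minimal sketch using SQLite via Python's standard sqlite3 module; the table and column names (client, document, client_name) are illustrative, not taken from the project's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized form: the client's name is stored exactly once, in the parent table.
cur.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE document (id INTEGER PRIMARY KEY, "
            "client_id INTEGER REFERENCES client(id), title TEXT)")
cur.execute("INSERT INTO client VALUES (1, 'Acme Ltd')")
cur.execute("INSERT INTO document VALUES (10, 1, 'Contract')")

# Reading the document together with its client's name requires a join:
joined = cur.execute(
    "SELECT d.title, c.name FROM document d JOIN client c ON c.id = d.client_id"
).fetchone()

# Denormalized form: the same fact (the client's name) is copied into the
# child table, so the join disappears from the read path.
cur.execute("ALTER TABLE document ADD COLUMN client_name TEXT")
cur.execute("UPDATE document SET client_name = "
            "(SELECT name FROM client WHERE client.id = document.client_id)")
direct = cur.execute("SELECT title, client_name FROM document").fetchone()

# The cost: any change to client.name must now also update the copies
# in document, otherwise the two tables drift apart.
```

Both queries return the same row; the denormalized one simply avoids the join at the price of extra maintenance on writes.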
We applied denormalization in two situations. When a calculation must be executed repeatedly during queries, it is best to store its result in the master table. And when the normalized database would require joining many tables to answer a query, we added redundancy to the database by copying values between parent and child tables.
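The first case, storing a repeatedly needed calculation in the master table, can be sketched as follows; this is an assumed order/order-line schema for illustration, not the project's actual tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Master table carries a precomputed column; detail rows hold the raw values.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL DEFAULT 0)")
cur.execute("CREATE TABLE order_line (order_id INTEGER, amount REAL)")

cur.execute("INSERT INTO orders (id) VALUES (1)")
cur.executemany("INSERT INTO order_line VALUES (1, ?)",
                [(10.0,), (2.5,), (7.5,)])

# Instead of summing the lines on every query, compute the result once and
# store it in the master table; it must be refreshed whenever lines change.
cur.execute("UPDATE orders SET total = "
            "(SELECT SUM(amount) FROM order_line WHERE order_id = orders.id)")

# Later reads fetch the stored result without touching order_line at all.
total = cur.execute("SELECT total FROM orders WHERE id = 1").fetchone()[0]
```

The refresh step could equally be attached to a trigger on order_line; doing it in application code, as here, keeps the sketch minimal.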