Efficient and fast querying is one of the most important
real-world challenges for modern database systems. Fast access to data is becoming
a crucial asset for researchers. In this study we
investigate the following problem: how can we generate high-quality query
plans that run fast, thereby minimizing response time and making
queries execute faster? We will also discuss and describe cost models in different dimensional
data spaces.

Many query planning techniques
have been presented, and several cost model techniques have been
presented as well. We will discuss pattern matching over compressed graphs [1] (Scalable Pattern Matching over Compressed Graphs via Dedensification), distributed
SQL query execution over multiple engines [2] (MuSQLE:
Distributed SQL Query Execution Over Multiple Engine Environments), a comparison of columnar, row, and array database management systems for
query processing [3] (Comparing Columnar, Row and Array DBMSs to Process Recursive Queries on Graphs), cost models for neighbor search, and real-time processing
techniques and cost models for query processing over high-dimensional data. We will discuss different factors in query planning and review
the related cost models described by different authors from all over the
world.


2-HISTORY OF QUERY PLANNING AND COST MODELS

The most popular platforms for data processing
on the cloud are based on MapReduce, introduced by Google. On top of
MapReduce, Google has also built the systems FlumeJava, Tenzing, and Sawzall.
FlumeJava is a library used to write data pipelines, which are
transformed into MapReduce jobs. Sawzall is a scripting language in which
computations over big datasets can be expressed. Tenzing is an analytical
query engine that pre-allocates machines to minimize latency. Hadoop, by
Yahoo, is the main open-source implementation of MapReduce. Hive is a
warehouse solution for Facebook. The query language of Hive (HiveQL) is a
subset of SQL, and its optimization techniques are
limited to simple transformation rules. Our optimization goal is to maximize
parallelism, minimize the number of MapReduce jobs, and minimize the
execution time of the query.
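As a minimal illustrative sketch (not any specific system's cost model), this optimization goal can be expressed as a cost function that scores a candidate plan by its number of MapReduce jobs and its estimated execution time. The `Plan` class, the per-job overhead constant, and all concrete numbers below are assumptions for illustration only:

```python
# Toy cost model for ranking candidate query plans (illustrative only).
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    num_jobs: int         # number of MapReduce jobs the plan launches
    est_runtime_s: float  # estimated total execution time in seconds

def plan_cost(plan: Plan, job_overhead_s: float = 30.0) -> float:
    """Score a plan: fixed startup overhead per job plus estimated runtime.

    Fewer jobs and a shorter runtime both lower the cost, matching the
    stated goals of minimizing job count and execution time.
    """
    return plan.num_jobs * job_overhead_s + plan.est_runtime_s

def best_plan(plans):
    """Pick the candidate plan with the lowest cost."""
    return min(plans, key=plan_cost)

candidates = [
    Plan("chained", num_jobs=3, est_runtime_s=120.0),  # 3*30 + 120 = 210
    Plan("merged",  num_jobs=1, est_runtime_s=150.0),  # 1*30 + 150 = 180
]
# The merged single-job plan wins despite a longer raw runtime,
# because per-job startup overhead penalizes extra MapReduce jobs.
```

A real optimizer would of course estimate runtimes from statistics and also account for parallelism, but this captures why merging MapReduce jobs can pay off even when the merged job itself runs longer.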