Hadoop-Based File System Revisions with Derby and Virtual Local File System Models
Hadoop is widely used for bulk data storage and processing, hosting both the storage and processing aspects of big-data scenarios. Big data refers to settings in which the data itself becomes the central concern of storage and processing. This article describes the Hadoop ecosystem, covering the common file system operations achievable through Hadoop's block-based storage and the parallel, distributed processing mechanism known as the MapReduce (MR) model. The article then proposes a new way of allocating file blocks that addresses the pitfalls of the existing mechanism. The core contribution is a Derby model intended exclusively for better utilization of file blocks, without loss of generality. A further contribution is a virtual method of using file systems that combines virtualization with localization. The outcome of this work is more efficient and effective use of the Hadoop file system, achieved by improving block allocation and by using local file system space alongside the distributed file system. We believe this work addresses the challenges of Hadoop file system management in a novel way by introducing the Derby model and the Virtual Local File System Model (VLFSM).