Volume 2, Issue 10, October 2010

Hybrid Licensing and Dual Activation of Desktop Software [ Full-Text ]

Sanjeev Kumar Biswas and Kanika Dalmia Gupta

The paper outlines a new, holistic licensing and product activation strategy for desktop software products in the current market context. The licensing models in use today suffer from numerous shortcomings: products are easily pirated, support only limited use models and restricted purchase options, require expensive auditing procedures, and cannot operate in locked-down environments with no internet connection. The proposed solution lets software vendors address all of these issues that are holding back software growth. The paper presents a method and system that takes advantage of prevalent mobile technology such as smartphones and feature phones. With it, software vendors can protect their products from piracy, broaden the target customer base, offer the end user greater flexibility in features and license duration, provide a smart and convenient way of handling the end user's licensing and purchase concerns, and in turn promote revenue growth. This purchase, licensing, and activation methodology is novel in that it uses the two most powerful communication channels, the internet and the mobile network, in tandem. The paper describes the system components and setup, the communication protocol between the components, and the different use models.


Algorithms of Hidden Markov Model and a Prediction Method on Product Preferences [ Full-Text ]

Ersoy ÖZ

Markov chains are stochastic processes in which knowledge of the present state uniquely determines the future stochastic behaviour, and this behaviour does not depend on the past of the process. Markov chains are widely used in areas such as finance, education, production, marketing, and brand loyalty. A Hidden Markov Model (HMM) is a stochastic process formed by adding certain properties to a Markov chain, and applications are developed using the solution algorithms for the three main HMM problems. This study applies an HMM based on Markov chains to product preferences and the reasons behind them; its aim is to develop an estimation method for product preferences. A MATLAB toolbox is used for the numerical solution of the model handled in the application.
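
The first of the three classical HMM problems mentioned above (evaluation) can be sketched with the forward algorithm. The "preference" states, product observations, and all probabilities below are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the HMM forward algorithm (the evaluation problem).
# States, observations, and probabilities are illustrative only.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs | model) by summing over all hidden state paths."""
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for t in range(1, len(obs)):
        alpha.append({
            s: sum(alpha[t - 1][r] * trans_p[r][s] for r in states)
               * emit_p[s][obs[t]]
            for s in states
        })
    return sum(alpha[-1].values())

# Two hidden preference states and two observable product choices.
states = ["loyal", "switching"]
start_p = {"loyal": 0.6, "switching": 0.4}
trans_p = {"loyal": {"loyal": 0.7, "switching": 0.3},
           "switching": {"loyal": 0.4, "switching": 0.6}}
emit_p = {"loyal": {"A": 0.9, "B": 0.1},
          "switching": {"A": 0.3, "B": 0.7}}

p = forward(["A", "B", "A"], states, start_p, trans_p, emit_p)
```

The other two classical problems (decoding and learning) are solved with the Viterbi and Baum-Welch algorithms, which share this same trellis structure.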


A Guide to Implementing a Chat Protocol: A Case Study Using Socket Programming and Object-Oriented Programming [ Full-Text ]

Hamdan O. Alanazi and Rafidah Md Noor

In the past, communication among people was limited and difficult; today the Internet lets people in different corners of the world communicate within seconds, effectively turning the world into a small village. The chat room is one of the most effective of these communication tools. In this paper, a new protocol for the chat room is presented using the socket programming concept. The protocol has been implemented in Java.
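
The paper's protocol is implemented in Java; as a language-neutral illustration of the kind of message framing such a chat protocol needs on top of sockets, here is a hypothetical Python sketch. The delimiter-based wire format is an assumption for illustration, not the paper's actual message layout.

```python
# Hypothetical sketch of a simple chat-room wire format.
# Each message is one UTF-8 line: COMMAND|sender|body

def encode(command, sender, body=""):
    """Serialize a chat message to the bytes sent over a socket."""
    for field in (command, sender, body):
        if "|" in field or "\n" in field:
            raise ValueError("field contains protocol delimiter")
    return f"{command}|{sender}|{body}\n".encode("utf-8")

def decode(raw):
    """Parse one received line back into (command, sender, body)."""
    command, sender, body = raw.decode("utf-8").rstrip("\n").split("|", 2)
    return command, sender, body

msg = encode("SAY", "alice", "hello room")
```

A server would read one such line per message from each client socket and broadcast it to the other connected clients.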


A Secure Mobile Banking System with Efficient Bandwidth and Reduced Delay [ Full-Text ]

Hamdan O. Alanazi and Patrice Boursier

In the past, banking was done manually, queues were long, and service to customers was poor. In this age of technology, with the fast growth of mobile subscribers, more banks are actively competing for a larger slice of the mobile banking business. Banks that already offer Internet banking services are looking to further improve this relatively new service, while smaller ones are beginning to look at it more seriously. In this paper, a secure mobile banking system is proposed that avoids delay, achieves high bandwidth, and allows banking to be done flexibly and safely.


Studies on E-Governance in India from a Data Mining Perspective [ Full-Text ]

Sonali Agarwal and G. N. Pandey

The fast expansion and propagation of innovative and promising Information and Communication Technologies (ICTs) opens new opportunities for growth and development. Data mining is a well-established approach to discovering knowledge from databases for the purpose of knowledge management. Governments at different levels generate and collect large amounts of data and information, and proper decision making is essential to utilize these resources well. Data mining can help administrators extract valuable knowledge and practices from this voluminous data, which can be used to strategically reduce costs, identify expansion opportunities, and detect fraud, waste, and abuse. The present investigation takes education data on primary schooling in order to analyze the status of primary education in Allahabad and in Uttar Pradesh, India. Clustering and classification methods are used to find similarities and dissimilarities among the districts of Uttar Pradesh, grouping districts into clusters that may then be treated together under one policy. The classification method is based on the reported Gross Enrollment Ratio (GER). Some unusual classifications of districts highlight that data mining could also establish the impact of migration from one district to another if all students were given unique identification through a social security number.


Secure Information Hiding in Data Mining on the Basis of a Privacy-Preserving Technique [ Full-Text ]

R. Dhanapal, Gayathri Subramanian, M. R. Raja Gopal and K. Hemamalini

Data mining has attracted a great deal of attention in the information industry and in society as a whole in recent years, owing to the wide availability of huge amounts of data and the imminent need to turn such data into useful information and knowledge. The information and knowledge gained can be used for applications ranging from market analysis, fraud detection, and customer retention to production control and science exploration. With more and more information accessible in electronic form and available on the web, and increasingly powerful data mining tools being developed and put into use, data mining may pose a threat to our privacy and data security. The real privacy concerns lie with unconstrained access to individual records, such as credit card, banking, and customer ID records, which expose privacy-sensitive information. In this paper we investigate this issue: since data are shared before mining, we examine the means to shield them, illustrated with Unified Modeling Language diagrams, covering the privacy-preserving definition, the problem statement, the privacy-preserving data mining technique, and the architecture of the proposed work. We propose an amalgamated scaffold for privacy-preserving data mining that ensures the mining process will not trespass on privacy up to a certain degree of security.


T-Drop: An Optimal Buffer Management Policy to Improve QoS in DTN Routing Protocols [ Full-Text ]

Qaisar Ayub and Sulma Rashid

Network architectures such as TCP/IP, AODV, and OSPF communicate in environments where an end-to-end path must exist before transmission starts; this is not feasible in many advanced wireless applications, for example military networks, vehicular ad hoc networks, pocket switched networks, and deep space communication. Delay Tolerant Networks (DTNs) build a communication infrastructure over intermittently connected mobile nodes: each node stores a message in its buffer, carries the message while moving, and forwards it on encountering other nodes. To maximize the delivery probability, a moving node replicates message copies to all encountered nodes. This iterative replication and storage of messages produces congestion, which is relieved by dropping messages; an efficient buffer management policy is therefore required to decide which message to drop when the buffer runs out of capacity. In this paper we propose a new buffer management policy, called T-Drop, which drops a message from a congested buffer only if the size of the queued message falls in a threshold range (T). We show through simulations that the proposed T-Drop policy reduces message drops, average hop count, and overhead while increasing the delivery probability compared with the existing DOA policy.
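
The drop decision described above can be sketched as follows. The choice of a symmetric size range around the incoming message is an assumption for illustration; the paper defines its own threshold range T.

```python
# Sketch of a T-Drop-style decision: when the buffer is full, drop a
# queued message only if its size falls inside a threshold range (here,
# a band of half-width t around the incoming message's size).

def t_drop(buffer, incoming_size, t):
    """Return the index of a message to drop, or None if none qualifies.

    buffer: list of sizes of the messages currently queued.
    t: half-width of the acceptable size range around incoming_size.
    """
    lo, hi = incoming_size - t, incoming_size + t
    for i, size in enumerate(buffer):
        if lo <= size <= hi:
            return i
    return None

queue = [120, 50, 300]            # queued message sizes (illustrative)
victim = t_drop(queue, incoming_size=55, t=10)
```

If no queued message falls in the range, the policy declines to drop, in contrast to drop-oldest schemes that always evict.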


A New Framework for High-Security, High-Rate Data Hiding Within the Quarter Space of Executable Files [ Full-Text ]

Hamdan O. Alanazi and Mohd Sapiyan

The strength of information hiding lies in the absence of standard algorithms for hiding secret messages, the randomness of hiding methods (for instance, combining several media covers with different methods to pass a secret message), and the lack of formal methods for discovering hidden data. Traditional methods were once sufficient to protect information, since the simple systems of the past did not require complicated defences; but with the progress of information technology it has become easy to attack systems, detection methods must keep pace with the differing methods used by hackers, and embedding methods may be placed under surveillance by system managers in organizations requiring a high level of security. This motivates research on new hiding frameworks and on the cover objects in which hidden information is embedded. Earlier research embedded information in executable files, but when the executable file is used as the cover, a challenge must be addressed: in our last research, the hidden data reached only 28% of the cover file's size. In this paper, a new information hiding framework is presented whose aim is to hide a data file within the quarter space of an execution file (EXE file), overcoming the challenge mentioned above. Meanwhile, since a cover file might itself be used to identify hidden information, the proposed framework addresses this dilemma by using the execution file as the cover file.
Simulated short-term responses indicate that the size of the hidden data depends on the size of Unused Area 1, Unused Area 2, the header file, and the image pages within the cover file, which together equal approximately 38% of the size of the EXE file before the hiding process.


A Secure Module for Transmitting Data over an Unsecured Channel: A Case Study on Electronic Medical Records [ Full-Text ]

Hamdan O. Alanazi and Lim

Health care has recently become one of the most important subjects of public life; according to experts, the US government planned to spend $100 billion on it over the next 10 years. The Electronic Medical Record (EMR) is a computerized legal medical record created in an organization that delivers care, such as a hospital or doctor's surgery. In this age of technology, one of the most important requirements for the EMR is securing patients' records, protecting their rights, and identifying who is responsible for any disclosure of their data. Thus, a transmission architecture that guarantees patient privacy plays an important role in building a strong relationship between the medical center and the patient. The design must be carried out with care to protect patients' rights and to maintain confidentiality, integrity, authenticity, and non-repudiation. This paper describes the architecture of a secure transmission for single medical records; the authors used UML tools in the design.


Lexeme: An Ontology-Based Semantic Advertising Network [ Full-Text ]

Lilac A. Al-Safadi, Aseel Al-Dawood and Nadeen Al-Abdullatif

Lexeme is a prototype advertising network that connects Web sites wanting to host advertisements with advertisers wanting to run them. Lexeme aims to reach and attract target customers more effectively by integrating semantic Web technology, enabling computers to know what particular ads mean, what particular Web sites are about, and how they relate to each other. Advertising networks' reliance on keywords alone results in irrelevant and unappealing ads being displayed on a Web page; a semantic-based advertising network moves beyond simple keywords by understanding all the words on a page and how they relate to one another. In Lexeme, the description of ads and Web site content relies on an ontology representing the conceptualization of the knowledge domain. The advertiser defines the concept corresponding to the product or service sold in the ad, along with properties specifying the product's characteristics and its relationships with other concepts. The paper proposes a novel approach to matching ads with Web site content using semantic Web technology, illustrated by the Lexeme prototype.


Dynamic Error-Based Fair Scheduling Using a Two-Layered Distributed Heap Sort Tree for a Computational Grid [ Full-Text ]

Archana V Mire

Grid computing has emerged as an important new field focusing on resource sharing, in which resource management and job scheduling are the most crucial problems. In this paper, we propose a new fair scheduling algorithm that uses a distributed heap sort tree in a grid resource management model, treating resource management and job scheduling as an integrated whole. It aims to harness the maximum computational power by organizing resources in a heap sort tree, and to address fairness by reducing the service time error. In our model, a carefully designed agent obtains the dynamic, real-time available computational ability of the nodes in the grid environment, so that each new job can be assigned to the node with the largest available computational power, and each task receives enough computational power to complete within its deadline. The resources each user gets are proportional to the user's weight or share, which may be defined as the user's contribution to the infrastructure or the price he is willing to pay for services. Scheduling of tasks is based on searching the root of the heap tree for the service time error, which distributes resources fairly among users; fairness is defined as the proportional allocation of resources to tasks according to their demand. All grid computational resources are organized into a distributed two-layered heap sort tree, making the system more scalable, robust, fault-tolerant, and high-performance. By using agents to construct and reconstruct the two-layered heap sort tree, the model fits well with the unpredictably changing grid environment.
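
The core heap idea, assigning each job to the node with the largest currently available computational power, can be sketched as below. The single-layer structure, node names, and power values are simplifying assumptions; the paper's model uses a two-layered distributed tree maintained by agents.

```python
# Sketch: keep nodes in a heap ordered by available computational power,
# so the scheduler can always hand the next job to the strongest node.
# Python's heapq is a min-heap, so power is stored negated.
import heapq

class NodeHeap:
    def __init__(self, nodes):
        # nodes: dict mapping node name -> available computational power
        self.heap = [(-power, name) for name, power in nodes.items()]
        heapq.heapify(self.heap)

    def assign(self, cost):
        """Pop the most powerful node, charge it the job's cost, push it back."""
        neg_power, name = heapq.heappop(self.heap)
        remaining = -neg_power - cost
        heapq.heappush(self.heap, (-remaining, name))
        return name

sched = NodeHeap({"n1": 40, "n2": 100, "n3": 70})
first = sched.assign(50)   # strongest node gets the job
second = sched.assign(10)  # heap re-orders after the charge
```

Because the root always holds the strongest node, each assignment is an O(log n) pop-and-push rather than a scan of all nodes.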


Comparison of Promoter Sequences by Alignment of Motif Sequences [ Full-Text ]

Meera A., Lalitha Rangarajan, Poonam V Reddy and Arun Chandrashekar

In this paper we propose a method to compare promoter sequences by aligning the sequences of motifs present in them. Alignment is performed using dynamic programming. Transcription factor (TF) motifs are extracted from the sequences using the 'TF search' tool; the resulting motif sequences are then aligned and a match score obtained. As a case study, we use promoter sequences, extracted from the NCBI database, of the enzyme citrate synthase of the central metabolic pathway in different mammals. Results reveal high similarity among the motif sequences of different organisms on the same chromosome, and some similarity among motif sequences on different chromosomes of the same organism.
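
Aligning at the motif level means the dynamic program operates on sequences of motif names rather than nucleotides. Below is a minimal global-alignment sketch in that spirit; the motif names and the match/mismatch/gap scores are illustrative assumptions, not the paper's scoring scheme.

```python
# Needleman-Wunsch-style global alignment over sequences of motif (TF)
# names instead of raw bases. Scores are illustrative placeholders.

def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two motif-name sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align motifs
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[m][n]

seq1 = ["TATA", "SP1", "CAAT", "GC"]   # hypothetical motif sequences
seq2 = ["TATA", "CAAT", "GC"]
score = align_score(seq1, seq2)
```

Treating each motif as a single alignment symbol keeps the DP table small, since promoters contain far fewer motifs than bases.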


Regulating Response Time in an Autonomic Computing System: A Fuzzy Control approach [ Full-Text ]

Harish S. Venkatarama and Chandrasekaran Kandasamy

E-commerce is an area where an autonomic computing system could be deployed very effectively. E-commerce has created demand for high-quality information technology services, and businesses are seeking quality-of-service guarantees from their service providers, expressed as part of service level agreements. Properly adjusting tuning parameters to enforce a service level agreement is time-consuming and skills-intensive; moreover, when the workload changes, the parameter settings may no longer be optimal. In an e-commerce system, where the workload changes frequently, the parameters must be updated at regular intervals. This paper describes an approach to automating the tuning of the Apache web server's MaxClients parameter using a fuzzy controller driven by the service level agreement and the current workload; this illustrates the self-optimizing characteristic of an autonomic computing system.
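
A fuzzy controller of this kind can be sketched as follows: fuzzify the response-time error, fire simple rules, and defuzzify into a MaxClients adjustment. The membership breakpoints, rule consequents, and step size below are assumptions for illustration, not the paper's actual controller.

```python
# Hypothetical sketch of a fuzzy step controller for Apache's MaxClients.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_step(error, max_step=25):
    """error = measured - target response time (seconds)."""
    # Fuzzify: degree to which the server is "too slow" or "too fast".
    too_slow = tri(error, 0.0, 1.0, 2.0) + (1.0 if error >= 2.0 else 0.0)
    too_fast = tri(error, -2.0, -1.0, 0.0) + (1.0 if error <= -2.0 else 0.0)
    weight = too_slow + too_fast
    if weight == 0:
        return 0
    # Rule 1: too slow -> reduce MaxClients; Rule 2: too fast -> raise it.
    # Defuzzify by the weighted average of the rule consequents.
    return round((too_slow * -max_step + too_fast * max_step) / weight)

max_clients = 150
max_clients += fuzzy_step(error=1.0)   # responses 1 s slower than the SLA target
```

The controller would run periodically, so the parameter tracks the workload instead of being hand-tuned once.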


An Improved Multiple-Fault Reassignment-Based Recovery in Cluster Computing [ Full-Text ]

Sanjay Bansal and Sanjeev Sharma

When multiple nodes fail, performance degrades far more than for a single node failure. Node failures in cluster computing can be tolerated by multiple-fault-tolerant computing, but existing recovery schemes are efficient for a single fault, not for multiple faults. The recovery scheme proposed in this paper has two phases, a sequential phase and a concurrent phase. In the sequential phase, the load of all working nodes is distributed uniformly and evenly by the proposed dynamic rank-based load distribution algorithm. In the concurrent phase, the loads of all failed nodes, as well as newly arriving jobs, are assigned by the failed-node job allocation algorithm, which simply finds the least-loaded among the available nodes. Running the algorithms sequentially and concurrently improves performance as well as resource utilization: the dynamic rank-based load redistribution algorithm acts as the sequential restoration algorithm, while the reassignment algorithm that moves failed nodes' work to the least-loaded computing nodes acts as the concurrent recovery algorithm. Since load is distributed evenly and uniformly among all available working nodes in few iterations, with low iteration time and communication overhead, performance is improved. The dynamic ranking algorithm is a low-overhead, fast-converging algorithm for reassigning tasks uniformly among all available nodes, and reassignment of failed nodes' work is done by a low-overhead, efficient failure job allocation algorithm. Test results showing the effectiveness of the proposed scheme are presented.
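
The two phases can be sketched as below. Node names, load values, and the even-share rebalancing rule are illustrative assumptions standing in for the paper's dynamic rank-based and job allocation algorithms.

```python
# Sketch of the two recovery phases: first even out load across working
# nodes (sequential phase), then send each failed node's job to the
# currently least-loaded node (concurrent phase).

def rebalance(loads):
    """Sequential phase: distribute the total load evenly across nodes."""
    total = sum(loads.values())
    share, extra = divmod(total, len(loads))
    return {name: share + (1 if i < extra else 0)
            for i, name in enumerate(sorted(loads))}

def reassign_failed(loads, failed_jobs):
    """Concurrent phase: place each failed job on the least-loaded node."""
    loads = dict(loads)
    placement = {}
    for job, cost in failed_jobs.items():
        target = min(loads, key=loads.get)   # least-loaded node
        loads[target] += cost
        placement[job] = target
    return placement, loads

working = rebalance({"n1": 10, "n2": 2, "n3": 6})
placed, after = reassign_failed(working, {"j1": 4, "j2": 1})
```

Because the sequential phase leaves all nodes near the same load, the concurrent least-loaded search stays cheap and keeps the cluster balanced as failed jobs arrive.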


Design of a LUT-Based Reversible Field Programmable Gate Array [ Full-Text ]

MD. Masbaul Alam Polash and Shamima Sultana

Reversible logic plays an important role in low-power computation, cryptography, communications, digital signal processing, and the emerging field of quantum computing. At the same time, the architecture of a Field Programmable Gate Array (FPGA) has a dramatic effect on the final device's speed, area efficiency, and power consumption. This paper presents a novel design of a reversible FPGA architecture based on look-up tables (LUTs). We consider the generalized structure of the basic configurable logic block (CLB) and the I/O pad of a Xilinx FPGA, including devices such as LUTs, sequential elements (flip-flops), multiplexers, and control circuitry. Each component required to design the reversible FPGA improves on existing designs in terms of number of gates, garbage outputs, and quantum cost.
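
The defining property of the gates such a fabric is built from is that each is a bijection on its inputs, so no information is lost. The Toffoli (CCNOT) gate, a standard universal reversible gate (not one specific to this paper), illustrates this as a plain truth-table function:

```python
# The Toffoli (controlled-controlled-NOT) gate: flip the target bit c
# iff both control bits a and b are 1. Shown here as a truth-table
# function to check reversibility exhaustively.
from itertools import product

def toffoli(a, b, c):
    """Return the gate's 3-bit output for 3-bit input (a, b, c)."""
    return a, b, c ^ (a & b)

# Bijectivity: every 3-bit output pattern appears exactly once.
outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
reversible = len(set(outputs)) == 8
```

The gate is also its own inverse, which is why reversible circuits can be "uncomputed" to recover inputs without producing extra garbage outputs.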


A Low Cost Automatic Destination Announcement System [ Full-Text ]

Ayob Johari, Mohd Syafiq Amari, Mohd Helmy Abd Wahab, Mohd Norzali Haji Mohd, M.Erdi Ayob,  M. Izwan Ayob, M. Afif Ayob and Noraihan Esa

This paper describes the development of an automatic announcement system for train passengers based on radio-frequency identification (RFID) and a programmable integrated circuit (PIC), developed to solve the inconsistent announcements of the current manual system. The system automatically announces the next destination as the train travels towards its next stop. RFID tags, which act as sensors, are placed at designated locations along the track, while the RFID reader is mounted on the train. A PIC holding the programmed instructions, written in C, interfaces between the RFID reader and the announcement system: it receives the destination data from the reader as the train passes a tag, converts the identified data into a signal, and announces the destination through the train's PA (loudspeaker) system.
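
The mapping step the PIC performs, from a detected tag ID to the announcement for the next stop, can be sketched as a simple lookup. The tag IDs and station names below are hypothetical placeholders (the PIC itself is programmed in C).

```python
# Sketch of the tag-to-announcement lookup the PIC performs when the
# train passes a trackside RFID tag. IDs and stations are illustrative.

ROUTE = {
    "TAG-01": "Next station: Central",
    "TAG-02": "Next station: Riverside",
    "TAG-03": "Next station: Airport",
}

def announce(tag_id):
    """Return the announcement for a detected tag, or None if unknown."""
    return ROUTE.get(tag_id)

message = announce("TAG-02")
```

In the real system, the returned text would be converted to an audio signal and played over the train's PA system.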