Now showing items 1-20 of 75
Abstract: | Clustering schemes improve the energy efficiency of wireless sensor networks. The inclusion of mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol based on metrics viz. number of cluster head transitions, average residual energy, number of alive nodes and number of messages lost. |
Description: | Computing Communication and Networking Technologies (ICCCNT), 2010 International Conference on |
URI: | http://dyuthi.cusat.ac.in/purl/3868 |
Files | Size |
---|---|
An adaptive clu ... reless sensor networks.pdf | (157.5Kb) |
Abstract: | This paper discusses the complexities involved in managing and monitoring the delivery of IT services in a multiparty outsourcing environment. The complexities identified are grouped into four categories and tabulated. A discussion of an attempt to model a multiparty outsourcing scenario using UML is also presented and explained using an illustration. Such a model, when supplemented by a performance evaluation tool, can enable an organization to manage the provision of IT services in a multiparty outsourcing environment more effectively. |
Description: | CSREA EEE |
URI: | http://dyuthi.cusat.ac.in/purl/3893 |
Files | Size |
---|---|
Analyzing and M ... tsourcing Environment..pdf | (318.9Kb) |
Abstract: | This paper discusses our research in developing a generalized and systematic method for anomaly detection. The key ideas are to represent normal program behaviour using system call frequencies and to incorporate probabilistic techniques for classification to detect anomalies and intrusions. Using experiments on the sendmail system call data, we demonstrate that concise and accurate classifiers can be constructed to detect anomalies. An overview of the approach that we have implemented is provided. |
Description: | JOURNAL OF SOFTWARE, VOL. 2, NO. 6, DECEMBER 2007 |
URI: | http://dyuthi.cusat.ac.in/purl/3866 |
Files | Size |
---|---|
Anomaly Detection Using System Call.pdf | (312.1Kb) |
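The abstract above represents normal program behaviour as system call frequencies. A minimal sketch of that representation (the probabilistic classifier itself is not shown, and the vocabulary of call names is a hypothetical example):

```python
from collections import Counter

def syscall_frequency_vector(trace, vocabulary):
    """Represent a program trace as normalised system-call frequencies,
    the behaviour representation described in the abstract above.
    The classifier trained on these vectors is not modelled here."""
    counts = Counter(trace)
    total = len(trace) or 1  # guard against empty traces
    return [counts[name] / total for name in vocabulary]
```

A trace such as `['open', 'read', 'read', 'close']` over the vocabulary `['open', 'read', 'close', 'write']` yields the vector `[0.25, 0.5, 0.25, 0.0]`, which a classifier can then compare against profiles of normal behaviour.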
Abstract: | In recent years, protection of information in digital form has become more important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an “attack on confidentiality”. Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize this latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric key block ciphers like the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the Escrowed Encryption Standard (EES) are not efficient when the data size is large. With the availability of fast computing tools and communication networks at relatively lower costs today, these encryption standards appear not to be as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data.
It has been shown that public key algorithms are not a substitute for symmetric-key algorithms. Public key algorithms are slow, whereas symmetric key algorithms generally run much faster. Also, public key systems are vulnerable to chosen plaintext attack. In this research work, a fast symmetric key encryption scheme, entitled “Matrix Array Symmetric Key (MASK) encryption” based on matrix and array manipulations has been conceived and developed. Fast conversion has been achieved with the use of matrix table look-up substitution, array based transposition and circular shift operations that are performed in the algorithm. MASK encryption is a new concept in symmetric key cryptography. It employs matrix and array manipulation technique using secret information and data values. It is a block cipher operated on plain text message (or image) blocks of 128 bits using a secret key of size 128 bits producing cipher text message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler, and consequently, much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES. |
Description: | Division of Electronics Engineering, School of Engineering, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3089 |
Files | Size |
---|---|
Dyuthi-T1063.pdf | (1.722Mb) |
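The thesis abstract above names three operation types used by MASK: matrix table look-up substitution, array-based transposition, and circular shifts. The sketch below is a toy illustration of those operation types only — it is NOT the MASK cipher; the substitution table, block size and shift amount are hypothetical stand-ins chosen for brevity:

```python
# Toy round built from the operation types named in the abstract.
# SBOX is a hypothetical substitution table (a bijection on bytes,
# since gcd(7, 256) == 1), not the secret-derived table MASK uses.
SBOX = [(i * 7 + 3) % 256 for i in range(256)]
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

def rotate_left(block, n):
    """Circular shift of a byte sequence by n positions."""
    n %= len(block)
    return block[n:] + block[:n]

def transpose(block, rows=4, cols=4):
    """Treat a 16-byte block as a 4x4 matrix and transpose it
    (self-inverse for a square matrix)."""
    return [block[c * rows + r] for r in range(rows) for c in range(cols)]

def toy_round(block):
    block = [SBOX[b] for b in block]   # table look-up substitution
    block = transpose(block)           # array-based transposition
    return rotate_left(block, 3)       # circular shift

def toy_round_inverse(block):
    block = rotate_left(block, -3)
    block = transpose(block)
    return [INV_SBOX[b] for b in block]
```

Running the inverse round after the forward round recovers the original block, which is the invertibility property any such round must have; the real cipher keys these operations with a 128-bit secret.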
Abstract: | This paper presents a new approach to implementing Reed-Muller Universal Logic Module (RM-ULM) networks with reduced delay and hardware for synthesizing logic functions given in Reed-Muller (RM) form. Replication of a single control line RM-ULM is used as the only design unit for defining any logic function. An algorithm is proposed that does exhaustive branching to reduce the number of levels and modules required to implement any logic function in RM form. This approach attains a reduction in delay and power over other implementations of functions having a large number of variables. |
Description: | NORCHIP Conference, 2005. 23rd |
URI: | http://dyuthi.cusat.ac.in/purl/3883 |
Files | Size |
---|---|
Automated synth ... logic module networks.pdf | (2.066Mb) |
Abstract: | Efficient optic disc segmentation is an important task in automated retinal screening, and optic disc detection is fundamental for medical reference and for retinal image analysis applications. The most difficult problem of optic disc extraction is locating the region of interest, which is moreover a time-consuming task. This paper tries to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means combined with thresholding. The discs determined by the new method agree relatively well with those determined by the experts. The method has been validated on a data set of 110 colour fundus images from the DRION database, and has obtained promising results. The performance of the system is evaluated using the difference in horizontal and vertical diameters between the obtained disc boundary and the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground truth diameters with the diameters detected by the new method are 0.946 and 0.958, and 0.94 and 0.974, respectively. The scatter plot shows that the ground truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases. |
Description: | (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 5, No. 7, 2014 |
URI: | http://dyuthi.cusat.ac.in/purl/4580 |
Files | Size |
---|---|
Automatic Optic ... om Color Fundus Images.pdf | (602.5Kb) |
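The evaluation above reports Pearson correlations between detected and ground-truth disc diameters. A minimal sketch of that correlation computation (the diameter values below are illustrative, not data from the paper):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient, the agreement measure the
    abstract above uses to compare detected diameters against the
    expert ground truth."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near 1.0, as reported in the abstract, indicates that the detected diameters track the ground truth closely.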
Abstract: | Data caching is an important technique in mobile computing environments for improving data availability and access latency, particularly because these environments are characterized by narrow-bandwidth wireless links and frequent disconnections. The cache replacement policy plays a vital role in improving performance in a cached mobile environment, since the amount of data stored in a client cache is small. In this paper we review some of the well-known cache replacement policies proposed for mobile data caches, and compare these policies after classifying them based on the criteria used for evicting documents. In addition, this paper suggests some alternative techniques for cache replacement. |
Description: | International Journal of Ad hoc, Sensor & Ubiquitous Computing (IJASUC) Vol.3, No.4, August 2012 |
URI: | http://dyuthi.cusat.ac.in/purl/3870 |
Files | Size |
---|---|
CACHE REPLACEMENT STRATEGIES FOR MOBILE.pdf | (112.5Kb) |
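Surveys like the one above typically start from the classical eviction policies. As a concrete baseline, here is a minimal sketch of Least-Recently-Used (LRU) replacement; real mobile cache policies weigh document size, retrieval cost and consistency as well, which this sketch deliberately omits:

```python
from collections import OrderedDict

class LRUCache:
    """Least-Recently-Used eviction: when the cache is full, the item
    untouched for the longest time is evicted. Capacity is counted in
    items; size- and cost-aware policies generalise this score."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used
```

With capacity 2, inserting `a`, `b`, touching `a`, then inserting `c` evicts `b`: recency, not insertion order, decides the victim.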
Abstract: | This paper proposes a content based image retrieval (CBIR) system using the local colour and texture features of selected image sub-blocks and the global colour and shape features of the image. The image sub-blocks are roughly identified by segmenting the image into partitions of different configurations, finding the edge density in each partition using edge thresholding and morphological dilation, and finding the corner density in each partition. The colour and texture features of the identified regions are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. A combined colour and texture feature vector is computed for each region. The shape features are computed from the Edge Histogram Descriptor (EHD). The Euclidean distance measure is used for computing the distance between the features of the query and target images. Experimental results show that the proposed method provides better retrieval results than some of the existing methods. |
Description: | International Journal of Advanced Science and Technology Vol. 48, November, 2012 |
URI: | http://dyuthi.cusat.ac.in/purl/3877 |
Files | Size |
---|---|
CBIR Using Loca ... es of Image Sub-blocks.pdf | (786.3Kb) |
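The retrieval step above ranks target images by Euclidean distance between feature vectors. A minimal sketch of that ranking (the feature vectors here are arbitrary illustrative numbers, not real colour/texture features):

```python
import math

def euclidean(u, v):
    """Distance between two feature vectors, e.g. a concatenated
    HSV-histogram + GLCM texture vector as in the abstract above."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_targets(query_vec, target_vecs):
    """Return target indices ordered by increasing distance from the
    query -- the nearest (most similar) image comes first."""
    return sorted(range(len(target_vecs)),
                  key=lambda i: euclidean(query_vec, target_vecs[i]))
```

In a full CBIR pipeline the per-region colour, texture and shape distances would be combined before ranking; this sketch shows only the final nearest-neighbour ordering.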
Abstract: | In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes with weights greater than a minimum quality threshold; nodes attempting to access the wireless medium with a low weight are allowed to transmit only when their weight becomes high. This results in many poor-quality nodes being deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by turning off the radio interface of all unnecessary nodes that are not included in the routing path. Through simulation results, we show that our proposed protocol achieves higher throughput and fairness besides reducing delay. |
Description: | IJCSNS International Journal of Computer Science and Network Security, VOL.10 No.7, July 2010 |
URI: | http://dyuthi.cusat.ac.in/purl/3907 |
Files | Size |
---|---|
Channel Adaptiv ... ome Performance Issues.pdf | (281.1Kb) |
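The admission rule described above (only nodes whose combined weight meets a quality threshold may transmit) can be sketched as follows. The weight-combination formula is not given in the abstract, so weights are taken here as precomputed values; the node names and numbers are hypothetical:

```python
def admitted_nodes(node_weights, threshold):
    """Threshold admission sketched from the abstract above: a node may
    transmit only if its combined channel-state/link-quality weight is
    at least the minimum quality threshold. The load prediction that
    protects starved low-weight nodes is not modelled here."""
    return [node for node, w in node_weights.items() if w >= threshold]
```

For example, with weights `{'a': 0.9, 'b': 0.3, 'c': 0.7}` and threshold 0.5, nodes `a` and `c` are admitted while `b` must wait for its weight to improve.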
Abstract: | Code clones are portions of source code which are similar to the original program code. The presence of code clones is considered a bad feature of software, as it makes software maintenance difficult. Methods for code clone detection have gained immense significance in the last few years as they play a significant role in engineering applications such as analysis of program code, program understanding, plagiarism detection, error detection, code compaction and many similar tasks. Despite this, several features of code clones, if properly utilized, can make the software development process easier. In this work, we point out one such feature, highlighting the relevance of code clones in test sequence identification. Here program slicing is used in code clone detection. In addition, a classification of code clones is presented and the benefit of using program slicing in code clone detection is also discussed. |
Description: | Information and Communication Technologies (WICT), 2011 World Congress on |
URI: | http://dyuthi.cusat.ac.in/purl/3889 |
Files | Size |
---|---|
Code clones in ... equence identification.pdf | (346.2Kb) |
Abstract: | Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features are to be extracted from speech, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method which improves the recognition accuracy. So, a new method called Discrete Wavelet Packet Decomposition (DWPD) is introduced, which utilizes the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier. |
Description: | Computer Science & Information Technology (CS & IT) |
URI: | http://dyuthi.cusat.ac.in/purl/3905 |
Files | Size |
---|---|
COMBINED FEATUR ... FOR SPEECH RECOGNITION.pdf | (190.6Kb) |
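The DWT front end above decomposes the speech signal into approximation (low-frequency) and detail (high-frequency) coefficients. A minimal sketch of one level of that decomposition using the Haar wavelet — the simplest wavelet, used here only for illustration; the paper does not state which wavelet family it uses:

```python
import math

def haar_dwt_level(signal):
    """One level of a Haar Discrete Wavelet Transform. Feature
    extraction schemes like the one above typically recurse on the
    approximation coefficients for several levels and derive features
    (e.g. sub-band energies) from the resulting coefficients."""
    if len(signal) % 2:
        signal = list(signal) + [0.0]      # zero-pad to even length
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    return approx, detail
```

A locally constant signal produces zero detail coefficients, which is why the detail band isolates the rapid variations that distinguish speech sounds.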
Abstract: | Speech is a natural mode of communication for people, and speech recognition is an intensive area of research due to its versatile applications. This paper presents a comparative study of various wavelet-based feature extraction methods for recognizing isolated spoken words. Isolated words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. This work includes two speech recognition methods: the first is a hybrid approach using Discrete Wavelet Transforms and Artificial Neural Networks, and the second uses a combination of Wavelet Packet Decomposition and Artificial Neural Networks. Features are extracted using Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). Training, testing and pattern recognition are performed using Artificial Neural Networks (ANN). The proposed method is implemented for 50 speakers uttering 20 isolated words each. The experimental results obtained show the efficiency of these techniques in recognizing speech. |
URI: | http://dyuthi.cusat.ac.in/purl/3912 |
Files | Size |
---|---|
A Comparative S ... Isolated Spoken Words.pdf | (342.6Kb) |
Abstract: | Bank switching in embedded processors having a partitioned memory architecture results in code size as well as run-time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank switching codes introduced and in deciding the optimum data allocation to banked memory. A relation matrix formed for the memory bank state transition corresponding to each bank selection instruction is used for the detection of redundant codes. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data mapping scheme is subjected to a static machine code analysis which identifies the one with the minimum number of bank switching codes. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory blocks and other architectures, so that high performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example. |
Description: | International Journal of Software Engineering and Its Applications Vol. 6, No. 1, January, 2012 |
URI: | http://dyuthi.cusat.ac.in/purl/4668 |
Files | Size |
---|---|
A Compiler Inte ... ry Embedded Processors.pdf | (577.3Kb) |
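The core redundancy the paper targets — a bank-select instruction that re-selects the bank already active — can be sketched as a linear scan over a straight-line instruction list. This is a simplification of the relation-matrix analysis described above: the instruction encoding is hypothetical, and real control flow (branches, calls) would invalidate the tracked bank state:

```python
def eliminate_redundant_bank_selects(instructions):
    """Drop bank-select instructions that re-select the already active
    memory bank. Instructions are modelled as ('BANKSEL', bank_number)
    or ('OP', text); only straight-line code is handled."""
    current_bank = None
    out = []
    for ins in instructions:
        if ins[0] == 'BANKSEL':
            if ins[1] == current_bank:
                continue                  # redundant: bank already active
            current_bank = ins[1]
        out.append(ins)
    return out
```

On a PIC-style target, each eliminated bank select saves both code space and an execution cycle, which is exactly the overhead the abstract describes.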
Abstract: | This paper proposes a region based image retrieval system using the local colour and texture features of image sub-regions. The regions of interest (ROI) are roughly identified by segmenting the image into fixed partitions, finding the edge map and applying morphological dilation. The colour and texture features of the ROIs are computed from the histograms of the quantized HSV colour space and the Gray Level Co-occurrence Matrix (GLCM) respectively. Each ROI of the query image is compared with the same number of ROIs of the target image, arranged in descending order of white pixel density in the regions, using the Euclidean distance measure for similarity computation. Preliminary experimental results show that the proposed method provides better retrieval results than some of the existing methods. |
Description: | Journal of Image and Graphics, Volume 1, No.1, March, 2013 |
URI: | http://dyuthi.cusat.ac.in/purl/3884 |
Files | Size |
---|---|
Content Based I ... ed Regions of Interest.pdf | (1.651Mb) |
Abstract: | Due to advancements in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds, and this paper lists some of the major issues involved. One of the key issues in mobile cloud computing is the end-to-end delay in servicing a request. Data caching is one of the techniques widely used in wired and wireless networks to improve data access efficiency. In this paper we explore the possibility of a cooperative caching approach to enhance data access efficiency in mobile cloud computing. The proposed approach is based on cloudlets, one of the architectures designed for mobile cloud computing. |
Description: | Global Journal of Computer Science and Technology Network, Web & Security Volume 13 Issue 8 Version 1.0 Year 2013 |
URI: | http://dyuthi.cusat.ac.in/purl/3899 |
Files | Size |
---|---|
Cooperative Cac ... Mobile Cloud Computing.pdf | (650.6Kb) |
Abstract: | Data caching can remarkably improve the efficiency of information access in a wireless ad hoc network by reducing access latency and bandwidth usage. The cache placement problem minimizes total data access cost in ad hoc networks with multiple data items. Ad hoc networks are multi-hop networks without a central base station and are resource constrained in terms of channel bandwidth and battery power. Through data caching, the communication cost can be reduced in terms of bandwidth as well as battery energy. As network nodes have limited memory, the problem of cache placement is a vital issue. This paper attempts to study the existing cooperative caching techniques and their suitability for mobile ad hoc networks. |
Description: | Data Science & Engineering (ICDSE), 2012 International Conference on |
URI: | http://dyuthi.cusat.ac.in/purl/3871 |
Files | Size |
---|---|
Cooperative Caching Techniques for Mobile.pdf | (146.8Kb) |
Abstract: | Decimal multiplication is an integral part of financial, commercial, and internet-based computations. The basic building block of a decimal multiplier is a single digit multiplier. It accepts two Binary Coded Decimal (BCD) inputs and gives a product in the range [0, 81] represented by two BCD digits. A novel design for single digit decimal multiplication that reduces the critical path delay and area is proposed in this research. Out of the possible 256 combinations for the 8-bit input, only a hundred combinations are valid BCD inputs. Among these hundred valid combinations, only four require a full 4 x 4 multiplication; the remaining combinations need only smaller multiplications. The proposed design makes use of this property. This design leads to a more regular VLSI implementation, and does not require special registers for storing easy multiples. This is a fully parallel multiplier utilizing only combinational logic, and is extended to a Hex/Decimal multiplier that gives either a decimal output or a binary output. The accumulation of partial products generated using single digit multipliers is done by an array of multi-operand BCD adders for an (n-digit x n-digit) multiplication. |
Description: | Electronic Design, 2008. ICED 2008. International Conference on |
URI: | http://dyuthi.cusat.ac.in/purl/3860 |
Files | Size |
---|---|
Decimal Multipl ... compact BCD Multiplier.pdf | (1.493Mb) |
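The behaviour of the single digit multiplier described above — two BCD digits in, a product in [0, 81] out as two BCD digits — can be stated as a reference model. This sketch captures only the functional contract, not the gate-level optimisations that are the paper's contribution:

```python
def bcd_digit_multiply(a_bcd, b_bcd):
    """Reference model of a single-digit BCD multiplier: multiply two
    BCD digits (0-9) and return the product as a (tens, units) pair of
    BCD digits, covering the full output range [0, 81]."""
    if not (0 <= a_bcd <= 9 and 0 <= b_bcd <= 9):
        raise ValueError("inputs must be valid BCD digits (0-9)")
    product = a_bcd * b_bcd
    return product // 10, product % 10    # two BCD output digits
```

For example, 9 x 9 = 81 is returned as the digit pair (8, 1); a hardware implementation is verified by checking it against exactly this table over all hundred valid input combinations.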
Abstract: | The demand for new telecommunication services requiring higher capacities, data rates and different operating modes has motivated the development of new-generation multi-standard wireless transceivers. A multi-standard design often involves extensive system level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool for the wireless communication standards GSM, WCDMA, WLANa, WLANb, WLANg and WiMAX is developed in MATLAB® using the GUIDE environment for visual analysis. The user can select a required wireless communication standard and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer to perform a quick design and analysis of decimation filters for multiple standards without the extensive calculation of the underlying methods. |
Description: | World Academy of Science, Engineering and Technology Vol:3 2009-11-22 |
URI: | http://dyuthi.cusat.ac.in/purl/3890 |
Files | Size |
---|---|
Decimation Filt ... nsceivers using MATLAB.pdf | (488.1Kb) |
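Decimation, the operation the toolbox above designs filters for, means low-pass filtering followed by sample-rate reduction. A deliberately crude sketch using a block average as the low-pass stage — a production decimation chain would use properly designed multistage FIR/CIC filters instead:

```python
def decimate(samples, factor):
    """Reduce the sample rate by an integer factor: average each block
    of `factor` samples (a crude low-pass filter), then keep one value
    per block. Illustrates the operation only, not the toolbox's
    filter-design methods."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]
```

Decimating `[1, 1, 2, 2, 3, 3]` by a factor of 2 yields `[1.0, 2.0, 3.0]`: half the samples, with the averaging suppressing content above the new Nyquist rate.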
Abstract: | This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family and they are spoken by the people of South India. Karaka relations are one of the most important features of Indian languages. They are the semantico-syntactic relations between verbs and other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of this approach is put to the test in two applications: one is machine translation and the other is a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language and a sentence generator for the target language. This work gives an elegant and compact account of the relation between vibhakthi and karaka roles in Dravidian languages. The same basic mapping also explains both simple and complex sentences in these languages, suggesting that the solution is not just ad hoc but has a deeper underlying unity. This methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to other languages in this group and to other applications. |
URI: | http://dyuthi.cusat.ac.in/purl/978 |
Files | Size |
---|---|
Dyuthi-T0048.pdf | (1.404Mb) |
Abstract: | The present study shows the design and development of a performance evaluation prototype for IT organizations in the context of outsourcing. The main objective of this research is to help an IT organization in an outsourcing context to realize its current standing, so it can take corrective steps wherever necessary and strive for continuous improvement. The service level management (SLM) process plays a crucial role in controlling the quality of IT service provision. Outsourcing is the process of entrusting the responsibility of providing certain goods and services to an external party. We have identified as many as twenty complexities and categorized them under four headings: complexities associated with contracts and SLAs, the SLM process, the SLM organization, and complexities due to intrinsic characteristics. With this study it is possible to measure the quality of the performance of an IT organization in an outsourcing environment effectively. |
URI: | http://dyuthi.cusat.ac.in/purl/999 |
Files | Size |
---|---|
Dyuthi-T0334.pdf | (6.423Mb) |
Dyuthi Digital Repository Copyright © 2007-2011 Cochin University of Science and Technology. Items in Dyuthi are protected by copyright, with all rights reserved, unless otherwise indicated.