Sunday, September 26, 2010

Cab assignment

A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database, and it allows different user application programs to access the same database easily. DBMSs may use any of a variety of database models, such as the network model or the relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, users can ask simple questions in a query language; accordingly, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization of a database and to access and use the information within it. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to present database information to users logically.




Components of DBMS

The DBMS Engine accepts logical requests from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.
The Data Definition Subsystem helps users create and maintain the data dictionary and define the structure of the files in a database.
The Data Manipulation Subsystem helps users add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between users and the information contained in a database. It allows users to specify their logical information requirements.
The Application Generation Subsystem contains facilities to help users develop transaction-intensive applications. Such applications usually require users to perform a detailed series of tasks to process a transaction. The subsystem provides easy-to-use data entry screens, programming languages, and interfaces.
The Data Administration Subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.
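As a small illustration of how the data definition and data manipulation subsystems show up in practice, here is a sketch using Python's built-in sqlite3 module; the table and column names are invented for the example:

```python
import sqlite3

# In-memory database; the DBMS engine handles the physical storage details.
conn = sqlite3.connect(":memory:")

# Data definition: declare the structure of a file (table) in the database.
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# Data manipulation: add and change information in the database.
conn.execute("INSERT INTO employees (name, dept) VALUES (?, ?)", ("Asha", "Sales"))
conn.execute("INSERT INTO employees (name, dept) VALUES (?, ?)", ("Ravi", "HR"))
conn.execute("UPDATE employees SET dept = ? WHERE name = ?", ("Finance", "Ravi"))

# Query: ask a simple question in SQL instead of writing an extraction program.
rows = conn.execute("SELECT name FROM employees WHERE dept = 'Sales'").fetchall()
print(rows)  # [('Asha',)]
conn.close()
```

The same SQL statements would work against most relational DBMSs; only the connection step changes.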


Uses of Computers in Word Processing





Computers play a big role in our daily lives, including the way we work. By using quality word processing software, businesses and individuals can trim their costs without sacrificing quality. From high-quality mail-merge letters personalized for every recipient to brochures and fliers that look like they came from a professional printing company, word processing software can be used in a number of ways. While large-scale jobs consisting of thousands of brochures and letters can be outsourced economically, it is often more economical to do smaller jobs in-house.






Form Letters that Don't Look Like Form Letters


Everyone gets form letters from time to time, and they usually go straight from the mailbox to the trash can. The key to getting your form letter noticed and read is to make sure it doesn't look or sound like a form letter. Letters that are personalized to each individual on your list are more likely to be opened, read and remembered than a dull, generic letter. Word processing software allows you to personalize each letter from a pre-built database with just a few clicks of the mouse. Since each field of the letter is filled in with information from your database, each letter will contain only the specific information you want to convey to that recipient. The letters can be printed in-house, a convenient and often economical option, or they can be printed to a file and given to a printing company. You will need to consider a number of factors, including speed, convenience and cost, when deciding whether to send the job to a printing company or do it yourself.
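The mail-merge idea can be sketched in a few lines of Python; the recipient "database" and the template fields below are invented for illustration:

```python
from string import Template

# A tiny stand-in for the pre-built database of recipients.
recipients = [
    {"name": "Mrs. Rao", "product": "garden tools"},
    {"name": "Mr. Singh", "product": "office chairs"},
]

# Each $field in the template is filled from the matching database column.
letter = Template("Dear $name,\n\nThank you for your interest in our $product.\n")

letters = [letter.substitute(r) for r in recipients]
print(letters[0])
```

A word processor does the same substitution behind the scenes, one merged letter per database record.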







Brochures and Promotional Materials


Word processing software lets you create not just letters and mailings, but brochures and fliers as well. Brochures, fliers and other promotional materials can be quite costly to produce, but business owners can save money by producing those items in-house. The major word processing packages on the market all include templates you can use to create promotional materials, as well as the tools needed to customize them. When creating those brochures, however, individuals and business owners need to make sure they have permission to use any images or clip art that are not from a free source; copyright laws must be respected. Once the finished brochures are created, they can be mass produced at a local printer, which is often more economical than printing thousands of brochures on your home or office printer.






More Than Just Text


Textual data is important, but sometimes simple text tells a true but dull story. That is why word processing software allows users to incorporate data from other applications, including spreadsheets, tables, pictures and other information. This allows users to dress up their documents for maximum impact. It also makes those documents much more useful to the people who receive them.

For instance, consider a boring letter from the sales department outlining their recent successes. Then consider that same letter with a graph showing sales growth and a picture of the company's satisfied new customers. Also, custom reports can make meetings more productive by making it easier for participants to find just the information they need.



COMPUTER APPLICATIONS IN INDUSTRY AND ENGINEERING
COMPUTER IN INDUSTRIES
The aim of Computers in Industry is to publish original, high-quality, application-oriented research papers that show new trends in and options for the use of Information and Communication Technology (ICT) in industry, link or integrate different technology fields in the broad area of computer applications for industry, and link or integrate different application areas of ICT in industry. General topics covered include:
The application of ICT in business processes such as design, engineering, manufacturing, purchasing, physical distribution, production management and supply chain management. This is the main thrust of the journal. It includes research on the integration of business process support, such as enterprise modelling, ERP and EDM.
The industrial use of ICT in knowledge-intensive fields such as quality control, logistics, engineering data management and product documentation.
Demonstrations of the enabling capabilities of new or existing technologies such as hard real-time systems, knowledge engineering, applied fuzzy logic, collaborative work systems and intelligent agents.
Papers focusing solely on ICT or on manufacturing processes may be considered out of scope. A continuous quality policy, based on strict peer review, ensures that published articles are technologically outstanding and at the front end, application-oriented with a generalised message, and representative of research at an international level.

COMPUTERS IN ENGINEERING
Computer Applications in Engineering Education provides a forum for publishing peer-reviewed, timely information on the innovative uses of computers and software tools in education, and for accelerating the integration of computers into the engineering curriculum. The journal encourages articles that present:
New software for engineering education
New educational technologies such as interactive video and multimedia presentations
Computer use in laboratories
Visualization, computer graphics, video and I/O issues
Computer-based engineering curricula
Computer use in classroom or independent-study situations
Use of commercial and government-owned software in education
Engineering software development and funding opportunities
Papers crossing boundaries between engineering disciplines are welcomed.
Posted by Niraj S at 7:15 AM
COMPUTER APPLICATIONS IN DATA PROCESSING


DATA PROCESSING
Computer data processing is any process that uses a computer program to enter data and summarise, analyse or otherwise convert data into usable information. The process may be automated and run on a computer. It involves recording, analysing, sorting, summarising, calculating, disseminating and storing data. Because data is most useful when well presented and actually informative, data-processing systems are often referred to as information systems. The terms are roughly synonymous, performing similar conversions: data-processing systems typically manipulate raw data into information, and likewise information systems typically take raw data as input to produce information as output.
Data processing may be distinguished from data conversion, where the process is merely to convert data to another format and does not involve any data manipulation.
DATA ANALYSIS
When the domain from which the data are harvested is a science or an engineering field, the terms data processing and information systems are considered too broad, and the more specialized term data analysis is typically used. Data analysis focuses on highly specialized and highly accurate algorithmic derivations and statistical calculations that are less often observed in the typical general business environment. In these contexts, data analysis packages like DAP, gretl or PSPP are often used. This divergence of culture is exhibited in the typical numerical representations used in data processing versus those used in data analysis: data processing's measurements are typically represented by integers or by fixed-point or binary-coded decimal representations of numbers, whereas most of data analysis's measurements are represented by floating-point representations of rational numbers.
PROCESSING

Basically, data is nothing but unorganised facts that can be converted into useful information; this process of converting facts into information is called processing. Practically all naturally occurring processes can be viewed as examples of data processing systems, where "observable" information in the form of pressure, light, etc. is converted by human observers into electrical signals in the nervous system as the senses we recognize as touch, sound, and vision. Even the interaction of non-living systems may be viewed in this way as rudimentary information processing systems. Conventional usage of the terms data processing and information systems restricts them to the algorithmic derivations, logical deductions, and statistical calculations that recur perennially in general business environments, rather than the more expansive sense of all conversions of real-world measurements into real-world information in, say, an organic biological system or even a scientific or engineering system.





ELEMENTS OF DATA PROCESSING




In order to be processed by a computer, data first needs to be converted into a machine-readable format. Once data is in digital form, various procedures can be applied to it to produce useful information. Data processing may involve various processes, including:




Data summarization
Data aggregation
Data validation
Data tabulation
Statistical analysis

DATA SUMMARIZATION
Data summarization condenses evaluational data, both primitive and derived, into derived evaluational data that is more general in nature. Since the data in a data warehouse is of very high volume, a mechanism is needed to extract only the relevant and meaningful information in a less cluttered format. Data summarization gives data consumers a generalized view of disparate bulks of data.
Data summarization in very large multi-dimensional datasets, as in the case of data warehouses, is very challenging work. It typically requires intensive investigation by IT experts, database administrators and programmers so that overall trends and important exceptions can be identified and dealt with technically. A computer, or several computers working together, can perform exhaustive searches using highly sophisticated and complex algorithms to do the data summarization.
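A minimal sketch of the idea, collapsing detailed records into per-group totals and averages; the region/sales data below is invented sample data:

```python
from collections import defaultdict

# Detailed warehouse records (invented sample data): (region, sale amount).
records = [("North", 120), ("South", 90), ("North", 60), ("South", 30), ("East", 75)]

# Summarize: collapse the detail into a total and a count per region.
totals = defaultdict(lambda: {"total": 0, "count": 0})
for region, amount in records:
    totals[region]["total"] += amount
    totals[region]["count"] += 1

# Derived, more general data: the average sale per region.
summary = {r: v["total"] / v["count"] for r, v in totals.items()}
print(summary)  # {'North': 90.0, 'South': 60.0, 'East': 75.0}
```

Real warehouse summarization works on far larger volumes, but the principle, many detail rows in, a few general rows out, is the same.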

DATA AGGREGATION




In data aggregation, a value is derived from the aggregation of two or more contributing data characteristics.
Aggregation can be performed across different data occurrences within the same data subject, across business transactions, in a de-normalized database, and between the real-world and detailed data resource designs within a common data architecture.




Reporting and data analysis applications that tie company data users closely to data warehouses need to overcome database performance problems. Every single day, the amount of data collected grows at an exponential rate, and along with it grow the demands for more detailed reporting and analysis tools.



In a competitive business environment, the areas given the most focus to gain a competitive edge include timely financial reporting, real-time disclosure so the company can meet compliance regulations, and accurate sales and marketing data so the company can grow a larger customer base and thus increase profitability.
Data aggregation helps a company's data warehouse piece together different kinds of data so that they take on a meaning useful as a statistical basis for company reporting and analysis.
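A small sketch of deriving one value from two contributing characteristics; the transaction data below is invented for illustration:

```python
# Each transaction contributes two characteristics: quantity and unit price.
transactions = [
    {"item": "pen",  "qty": 10, "unit_price": 5.0},
    {"item": "book", "qty": 3,  "unit_price": 120.0},
    {"item": "pen",  "qty": 4,  "unit_price": 5.0},
]

# Aggregate: derive a single value (total revenue) from the
# contributing characteristics across all business transactions.
revenue = sum(t["qty"] * t["unit_price"] for t in transactions)
print(revenue)  # 430.0
```

In a warehouse, the same derivation would run over millions of rows, often precomputed into summary tables for reporting speed.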

DATA VALIDATION




In computer science, data validation is the process of ensuring that a program operates on clean, correct and useful data. It uses routines, often called "validation rules" or "check routines", that check for the correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit application program validation logic.
For business applications, data validation can be defined through declarative data integrity rules or procedure-based business rules[1]. Data that does not conform to these rules can negatively affect business process execution. Therefore, data validation should start with the definition of the business process and the set of business rules within it. Rules can be collected through the requirements-capture exercise[2].
The simplest data validation verifies that the characters provided come from a valid set. For example, telephone numbers should include the digits and possibly the characters +, -, ( and ) (plus, minus, and parentheses). A more sophisticated data validation routine would check whether the user had entered a valid country code, i.e., that the number of digits entered matched the convention for the country or area specified.
Incorrect data validation can lead to data corruption or a security vulnerability. Data validation checks that data are valid, sensible, reasonable, and secure before they are processed.




Common methods of data validation include:
Allowed character checks
Batch totals
Cardinality checks
Check digits
Consistency checks
Control totals
Cross-system consistency checks
Data type checks
File existence checks
Format or picture checks
Hash totals
Limit checks
Logic checks
Presence checks
Range checks
Referential integrity checks
Spelling and grammar checks
Uniqueness checks
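The telephone-number example above can be sketched in Python as an allowed-character check followed by a format check; the 10-digit convention is an assumption for the example, not a universal rule:

```python
import re

# Allowed-character check: digits plus +, -, parentheses and spaces.
ALLOWED = set("0123456789+-() ")

def validate_phone(number: str) -> bool:
    # Reject any character outside the valid set.
    if not set(number) <= ALLOWED:
        return False
    # Format check (assumed convention): exactly 10 digits
    # once the punctuation is stripped away.
    digits = re.sub(r"\D", "", number)
    return len(digits) == 10

print(validate_phone("(022) 555-1234"))  # True
print(validate_phone("555-HELP"))        # False (contains letters)
```

A country-aware routine would replace the fixed digit count with a lookup of the convention for the entered country code.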

DATA TABULATION
Cross tabulation is the process of creating a contingency table from the multivariate frequency distribution of statistical variables. Heavily used in survey research, cross tabulations (or crosstabs for short) can be produced by a range of statistical packages, including some that are specialised for the task. Survey weights often need to be incorporated. Unweighted tables can easily be produced by some spreadsheets and other business intelligence tools, where they are commonly known as pivot tables.





The following table lists the gender and handedness of a sample population of 12 individuals:

Sample no.  Gender  Handedness
 1          Female  right-handed
 2          Male    left-handed
 3          Male    right-handed
 4          Female  right-handed
 5          Female  right-handed
 6          Male    right-handed
 7          Male    left-handed
 8          Male    right-handed
 9          Female  right-handed
10          Female  left-handed
11          Male    right-handed
12          Female  right-handed

Cross-tabulation leads to the following contingency table:

          Right-handed  Left-handed  TOTALS
Males          4             2          6
Females        5             1          6
TOTALS         9             3         12
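The contingency table above can be reproduced with a few lines of Python, counting the (gender, handedness) pairs from the sample:

```python
from collections import Counter

# The 12 observations from the sample table above.
sample = [
    ("Female", "right"), ("Male", "left"),   ("Male", "right"),
    ("Female", "right"), ("Female", "right"), ("Male", "right"),
    ("Male", "left"),    ("Male", "right"),  ("Female", "right"),
    ("Female", "left"),  ("Male", "right"),  ("Female", "right"),
]

# Count the frequency of each (gender, handedness) combination.
crosstab = Counter(sample)
print(crosstab[("Male", "right")], crosstab[("Male", "left")])     # 4 2
print(crosstab[("Female", "right")], crosstab[("Female", "left")])  # 5 1
```

Statistical packages and spreadsheet pivot tables do exactly this counting, plus the row and column totals.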

STATISTICAL ANALYSIS




Statistics is the science of data collection, organization, and interpretation[1]; the data are often numerical but may take other forms, including relationships between entities. It deals with all aspects of this, including the planning of data collection in terms of the design of surveys and experiments.[2] The purpose is to obtain some overall understanding of group characteristics.
A statistician is someone who is particularly well versed in the ways of thinking necessary for the successful application of statistical analysis. Such people have often gained this experience through working in any of a wide number of fields. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject.
The word statistics can either be singular or plural.[3] When it refers to the discipline, "statistics" is singular, as in "Statistics is an art." When it refers to quantities (such as mean and median) calculated from a set of data,[4] statistics is plural, as in "These statistics are misleading."




Some well-known statistical tests and procedures are:
Analysis of variance (ANOVA)
Chi-square test
Correlation
Factor analysis
Mann–Whitney U
Mean square weighted deviation (MSWD)
Pearson product-moment correlation coefficient
Regression analysis
Spearman's rank correlation coefficient
Student's t-test
Time series analysis
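As a worked example of one procedure from the list, here is the chi-square statistic for the 2x2 contingency table from the cross-tabulation section, computed from first principles in Python:

```python
# Observed counts from the gender/handedness contingency table.
observed = [[4, 2],   # males:   right-handed, left-handed
            [5, 1]]   # females: right-handed, left-handed

row_totals = [sum(row) for row in observed]          # [6, 6]
col_totals = [sum(col) for col in zip(*observed)]    # [9, 3]
n = sum(row_totals)                                  # 12

# Chi-square: sum of (observed - expected)^2 / expected over all cells,
# where expected = row total * column total / grand total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (o - expected) ** 2 / expected

print(round(chi2, 3))  # 0.444
```

The statistic would then be compared against the chi-square distribution with 1 degree of freedom to decide whether gender and handedness are associated; here the value is far too small to suggest any association.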