Information technology has become a vital part of how we conduct business and our personal lives, and it is crucial in today's markets and industries. It is a fixture of every business plan and company, from the largest conglomerates maintaining crucial information in massive databases to the smallest businesses and individuals at home. This is best examined through the business world and the developments currently happening there.
Many of the world's companies today simply cannot function without efficient communication. From emails and overseas calls to the warehouse and the cubicle at the end of the hall, the Internet is the primary channel of information transfer. The Internet has evolved to the point that, without it, a company would be left without chat systems, vital internal and external communications, and the ability to handle and process large quantities of information.
Information technology is also a vital aspect of many inventory management systems. Companies can now know with the click of a button the entire organization's inventory, sales, accounts receivable and payable, and at the same time forecast sales for the coming fiscal year.
Typically these systems work best when run in tandem with a point-of-sale (POS) system, which has the benefit of tracking inventory as it is used. Of course, this is not the only way that IT is important, but only one aspect.
Probably the most powerful aspect of IT is the ability to manage data on a large scale. In the past, entire warehouses were filled to overflowing with information stored on paper; that same warehouse can now fit into a few desktop computers under someone's desk. Those days are over, and it seems the mail system will eventually follow suit. Regardless of geographic location, this stored information can be used, updated, downloaded, or uploaded from anywhere on the globe.
Storing large amounts of data is only one of information technology's powerful abilities, and stored data is useless if one cannot access and maintain it on a regular basis. Most companies use Management Information Systems (MIS) for this purpose. With an MIS, companies can manage day-to-day operations with precision and accuracy, at a pace able to keep up with today's market. These systems enable any company to track analytics, costs, capital, and productivity, and they can follow information over long periods of time, helping to maximize profitability and productivity.
Information technology is also central to Customer Relationship Management (CRM). CRM systems are changing the way most companies communicate with their customers: they capture interactions such as phone calls, emails, transaction histories, and more. If someone calls an AT&T representative needing some personal information, the entire transaction is recorded and saved for possible later use. This is vital to maintaining good customer service, as a representative can look back on a customer's history to retrieve the details of a past transaction.
But IT is not just a part of the business world. It has reached into virtually every home on the planet. From cell phones to computers to how we receive our television signal, we rely on the information it provides 24 hours a day.
All tests are created in a similar pattern. There are fundamental steps to the creation of questions on any test or questionnaire, and the process can be an incredibly detailed undertaking.
Research has demonstrated the importance of test question construction. Students were found to give varying answers based directly on how a question was worded (Caldwell and Pate 1-5). However, we hardly need research to tell us that much of test question formulation rests on semantics and on how phrases and sentences are worded. This is something everyone deals with every day in conversation: how we word things is crucial to conveying meaning as it is intended.
According to Indiana University (1), tests are summative assessments: they measure performance on a given task, usually one that requires recalling or reasoning toward an outcome. There are a few general forms of test: objective or "closed-answer" tests, essay tests, and multiple-choice tests.
There is no fixed number of steps for forming good test questions, but generally speaking one would follow a series of them (DuVerneay). The first step is always to rely on scientific methodology. Next, ask the right questions by generating a hypothesis; this step is crucial because without a general understanding of the test's planned parameters and constructs, it will be difficult to demonstrate construct validity. There are numerous methods for determining test question validity, such as establishing predictive or concurrent validity. Validity is ultimately the most crucial aspect of test development: does the test truly measure what I want it to measure? Next, consider test length; planning how long the test should take a respondent helps in deciding the number of questions. Avoid unnatural test flow, as the structure of the test is highly relevant; one guideline is to put harder questions toward the end, since respondents typically do poorer when they perceive the test as hard from the start. Be careful with leading language; leading is a way of "planting a seed" for certain thoughts and is often responsible for poor test performance. Especially relevant to Internet testing and questionnaires is ensuring that you are in fact testing the right people: one would not distribute a high-school math exam to a 500-level master's student. The final consideration for question and test construction is pilot testing, a method of finding faulty or unnecessary questions that lower a test's internal validity.
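The pilot-testing step described above can be sketched in code. Below is a minimal, hypothetical item analysis in Python: it computes each question's difficulty (the proportion of correct pilot responses) and flags items falling outside an acceptable range. The thresholds and data are illustrative, not taken from the cited sources.

```python
# Hypothetical item analysis for a pilot test: flag questions whose
# difficulty (proportion of respondents answering correctly) falls
# outside a chosen acceptable band. Thresholds are illustrative.

def item_difficulty(responses):
    """Proportion of correct answers (True) for one question."""
    return sum(responses) / len(responses)

def flag_items(answer_matrix, low=0.2, high=0.9):
    """answer_matrix[q] is a list of True/False answers for question q.
    Returns indices of questions that are too hard or too easy."""
    flagged = []
    for q, responses in enumerate(answer_matrix):
        p = item_difficulty(responses)
        if p < low or p > high:
            flagged.append(q)
    return flagged

pilot = [
    [True, True, True, True, True],     # everyone correct: too easy
    [True, False, True, False, True],   # moderate difficulty
    [False, False, False, False, False],# everyone wrong: too hard
]
print(flag_items(pilot))  # → [0, 2]
```

Questions flagged this way would then be reworded or dropped before the full test is administered.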
Research indicates that Internet methods of data collection produce samples at least as diverse as many already used in psychological research. Internet samples are not highly representative, or even random, samples of the general population, but neither are traditional samples in psychology, and representativeness can be measured and reported alongside the test or questionnaire. Moreover, the large sample sizes that are a typical selling point of Internet questionnaires mean that even small proportions of participants (e.g., Native Americans) are represented in substantial absolute numbers (Gosling, Vazire, Srivastava and John 102).
Further analyses suggest that data gathered by Internet methods are at least as good in quality as those gathered by traditional paper-and-pencil methods. Findings show that Web-questionnaire results generalize across presentation formats, do not appear contaminated by counterfeit data or repeat responders, and are thus far consistent with results from traditional methods. Generally, data collected via Internet methods are not as flawed as was once believed.
There are various preconceptions about internet-based questionnaires to be addressed. The first is that internet samples are not sufficiently diverse. Research has demonstrated that web questionnaires are at least as good as the feasible alternatives presently used. Krantz and Dalal (35) demonstrated that though Internet samples are not usually fully representative of a population, they are more diverse than samples published in highly selective psychology journals. They showed that Internet samples are more representative than samples acquired via traditional methods with respect to gender, socioeconomic status, geographic location, and age.
The second is that Internet samples are unusually maladjusted. Internet users have typically been depicted as socially secluded computer geeks or social misfits, yet there is little evidence to support this claim. A third preconception is that Internet findings do not generalize across presentation formats. In fact, little is known about the impact of differing administration formats even in conventional research: when administering a questionnaire, one is free to choose between Scantron forms and paper-and-pencil tests, or between administering it to individuals or to groups, and generally there is little concern about the differences among formats. Presentation effects nevertheless remain a concern to researchers (Bowker & Dillman 1; Dillman, Tortora, et al. 1; Dillman, Tortora, Conradt, et al. 5).
Anyone who has taken questionnaires on the Web knows that they come in a wide variety of styles, ranging from amusing diversions to more serious purposes, such as gaining insight into one's own behavior. Furthermore, software and hardware differences among participants mean that not every participant will see exactly the same presentation of a questionnaire. Some research with probabilistic sampling has found that formatting effects can influence response rates (Bowker & Dillman 1; Dillman, Tortora, Conradt, et al. 1), but as yet no evidence exists that they affect the content of people's responses. Of course, even small differences in presentation format can have serious negative consequences for certain experiments; but for most questionnaire research, presentation effects do not seem to threaten the quality of the data.
A fourth preconception is that Internet participants are not sufficiently motivated to take a study seriously and respond meaningfully. There are several reasons this is far from the truth. For one, there is evidence that participants engage in less socially desirable responding and less survey "satisficing" when responding to a Web questionnaire than to a traditional paper questionnaire (Kiesler and Sproull 402). Second, Web questionnaires offer an exceptional tool for motivating participants to respond seriously: appealing to people's curiosity by providing immediate feedback. Participants are motivated to answer sincerely in order to obtain precise, accurate feedback about their personality. This advantage is made possible by the automated data entry and scoring that Web questionnaires permit, features which also save time and money in recruitment and data entry.
A fifth preconception is that the anonymity provided by Web questionnaires compromises the integrity of questionnaire data. Traditional methods take steps to ensure participants' confidentiality, though few can claim to provide total anonymity: participants typically complete a questionnaire in a controlled setting, hand it to the administrator, and have their data entered by hand, all of which decreases the anonymity of their responses. In contrast, Web questionnaires allow participants to complete the questionnaire on their own, in whatever setting they feel most comfortable, without ever seeing or communicating with a testing administrator, and with no need for data entry in the traditional sense. These characteristics, unique to Web-based questionnaires, allow one to address questions that would be tricky or impractical to address with traditional methods. For example, participants often feel much more comfortable disclosing personal information in a Web questionnaire than in a less anonymous setting such as an office or other controlled location (Levine, Ancill, and Roberts 216; Locke & Gilbert 255). The reporting of stigmatized health, drug-related, and sexual behaviors has been shown to increase with greater anonymity (Turner et al. 867).
The final preconception is that Internet findings are not consistent with findings from traditional methods. Again, a good deal of research demonstrates otherwise: consistency across both methods is well documented for constructs such as self-monitoring, reaction time, and self-esteem.
Why are Web-based questionnaires better than paper-based questionnaires in terms of availability and usability? As certain aspects of this have already been discussed, I will not go into great detail, but there are a few more items to mention. Zazelenchuk (1605) found a significant correlation between user satisfaction and user effectiveness: the better someone understands a Web-based application and can use it without significant confusion, the greater their satisfaction in using that application in the future. In this case, the applications in question are Web-based questionnaires and surveys (Zazelenchuk 1605).
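A correlation of the kind Zazelenchuk reports can be computed with the standard Pearson coefficient. The sketch below uses entirely hypothetical effectiveness scores and satisfaction ratings, purely to illustrate the calculation; the numbers are not from the cited study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical pilot data: task-effectiveness scores (0-1) and
# satisfaction ratings (1-5) for five respondents.
effectiveness = [0.9, 0.7, 0.8, 0.4, 0.6]
satisfaction  = [4.2, 3.1, 4.4, 2.5, 3.9]
print(round(pearson_r(effectiveness, satisfaction), 2))
```

A value near +1 would indicate, as in Zazelenchuk's finding, that respondents who work more effectively with the application also report higher satisfaction.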
Their readiness and availability make Web-based questionnaires far more versatile than traditional methods. One, online questionnaires can be administered without an administrator present, freeing him or her to work on other tasks. Two, the online form can be taken at any time by the respondent. Three, Web-based applications provide an interactive experience that traditional methods simply cannot, and the ability to change or update specific aspects of a survey remains vastly superior to paper-and-pen methods. Four, over the last few decades, researchers have experienced a large drop in response rates in the traditional context (van Gelder, Bretveld and Roeleveld 1292). The research also identifies another drawback of traditional methods: the high cost associated with large study populations, which makes Web-based questionnaires an attractive and viable alternative. A major drawback of Internet-based testing, however, can be a lack of control: researchers and administrators cannot control the testing environment of respondents.
Current developments in Web-based questionnaires look highly promising for gathering information through tests and questionnaires. Research indicates that Web-based questionnaires, when carefully designed, can adequately be used in certain populations in developed countries, such as college students and virtually anyone with computer and Internet access. The near future stands to see increased response rates as availability and access continue to expand rapidly (Akl, Maroun, and Klocke 425).
In summary, Web-based questionnaires have been in use only briefly, over roughly the last decade, and development was slow to start. As technology and Internet access grow over the coming years, Web-based questionnaires have a viable future. It should also be noted that no method of data collection is perfect, whether traditional or Internet-based. Web-based questionnaires are fully able to compete with traditional modes of data collection and should be considered a complementary alternative.
Recent years have seen an explosion in Web-based questionnaires. This large increase in demand for these applications gives way for the creation of suitable launching platforms. Websites such as Survey Monkey, QuestionPro, and Formstack are dedicated platforms for the creation, set-up and implementation of Web-based surveys and questionnaires. The last few years have also seen the development of specific apps dedicated to questionnaires and surveys such as the iPad Questionnaire App. These are only a few to be sure, but the number of sites is growing at a substantial rate.
It goes without saying that certain languages are a necessity, such as HTML (HyperText Markup Language) and CSS (Cascading Style Sheets). These are the basic building blocks of a Web page and typically cannot be overlooked. HTML, the foundational building block and cornerstone of a page, is a necessary component and has come a long way in recent years in terms of functionality and ease of use. CSS, the aesthetic component, was at one point the simplest way to define a webpage's overall presentation and look. It has several advantages: it is easy to learn, its ease of use makes it highly advantageous to a website's overall look, and it fits into a site's file hierarchy quite easily. However, while CSS handles the presentation of a page, it provides none of a large website's interactivity; and while HTML can build a page, it has no behavior of its own beyond layout, and can only reference external files such as CSS stylesheets or JavaScript scripts. We now have many Web programming languages, such as JavaScript and jQuery (a JavaScript library), along with numerous others dedicated to providing the functionality behind a website's overall experience. CSS can deliver a website's look and presentation, but it cannot build upon the site's structure or manipulate the DOM. The need for website interactivity therefore led to a number of languages specifically formulated to provide it. In practical terms, these are languages that perform task-specific functions such as data delivery (for example, PHP can send an email when a user submits information on a webpage form), processing tasks, timing and date functions, advertising, and much more.
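The data-delivery role described above can be sketched in Python rather than PHP; the server-side job is the same in either language. When a questionnaire form is submitted, the browser sends a URL-encoded body that the server-side code must parse into fields. The field names and values here are hypothetical.

```python
from urllib.parse import parse_qs

# What a server-side language does on form submission, sketched in
# Python: parse the URL-encoded body the browser sends. After parsing,
# a real handler might store the answers or send a notification email.
body = "name=Ada&q1=agree&q2=disagree"  # hypothetical submitted form

# parse_qs maps each field to a list of values; take the first of each.
fields = {k: v[0] for k, v in parse_qs(body).items()}
print(fields)  # → {'name': 'Ada', 'q1': 'agree', 'q2': 'disagree'}
```

The same parsing step sits at the heart of any server-side questionnaire handler, whatever language delivers it.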
Web languages are broken into two categories: dynamic and compiled. Dynamic languages such as PHP, Python, and Ruby, to name a few, are generally referred to as server-side languages: they act as the messenger and liaison between a website and its host.
PHP is probably the most widely used server-side programming language. Many platforms are built on PHP, such as WordPress, Joomla, and Magento. Over the last few years, PHP has become much more object-oriented, which benefits the applications built on it. It has the further advantage of being relatively easy to learn, and many programmers are quite proficient with it. A drawback of PHP is the high cost of maintenance, due to a lack of automated tests as well as the relatively recent addition of object-oriented constructs (Top 5 Web Application Languages 1).
Another option is Python, a popular language widely used for the design and construction of Web-based applications. It is dynamic, provides server-side functionality, and has a strong user community. A cited drawback of Python is the learning curve associated with its use, though many regard it as one of the easier languages for newcomers to pick up.
Much like Python is Ruby, well known for its framework, Ruby on Rails. Like Python, Ruby is object-oriented and is very useful for creating domain-specific languages, which makes it highly extensible to a wide range of tasks. One of the major drawbacks to Ruby is, once again, the complexity and learning curve associated with using it (Top 5 Web Application Languages 1).
Compiled languages gain safety from the compilation of their code, and they commonly perform better than dynamic languages. Java (not to be confused with JavaScript) is a compiled, object-oriented language with syntax rooted in C++ (Top 5 Web Application Languages 1). Though it has stagnated a bit in the past, it has some of the most widely used support networks. Java provides a large array of functionality for building Web-based applications, including but not limited to testing, data management, and environments. Though its code is easily maintained, the major drawback (if it is one) is complexity of use: Java is not a simple language.
C# and VB.NET are part of the Microsoft .NET Framework, a general programming framework whose web applications run within ASP.NET. While these languages do not yet have the level of support that Ruby and PHP enjoy, they are getting better, and .NET is gaining ground in both users and support. As compiled languages they still offer a dynamic, highly useful programming experience, and the tooling is freely available. Once again, though, the learning curve is steep: C# is among the more difficult of these languages to learn. .NET also has the disadvantage of compatibility issues with Linux (Top 5 Web Application Languages 1).
Top-down and bottom-up are both strategies for processing information, used as methodologies in a variety of fields including software testing and development. In practice, each is a style of thinking about a particular problem or method of organizing (Jackson 495). Top-down approaches break a system apart to gain insight into its components and methods of operation; bottom-up approaches generally look at the individual parts and form a working model from the ground up. In terms of software design methods, almost all programming combines both: with object-oriented programming (OOP), the main goal or problem is subdivided by identifying domain objects (top-down), which are then composed into a final software program (bottom-up).
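The combination of the two directions can be sketched in Python. In this illustrative example (the class names and quiz are hypothetical), the problem "score a questionnaire" is first decomposed top-down into domain objects, and the small parts are then composed bottom-up into the working whole.

```python
# Top-down: the problem "score a questionnaire" is decomposed into
# hypothetical domain objects (Question, Questionnaire). Bottom-up:
# the small, independently testable parts are composed into the whole.

class Question:
    def __init__(self, text, answer):
        self.text = text
        self.answer = answer

    def is_correct(self, response):
        return response == self.answer

class Questionnaire:
    def __init__(self, questions):
        self.questions = questions

    def score(self, responses):
        # Compose the low-level checks into the overall behavior.
        return sum(q.is_correct(r) for q, r in zip(self.questions, responses))

quiz = Questionnaire([
    Question("2 + 2?", "4"),
    Question("Capital of France?", "Paris"),
])
print(quiz.score(["4", "Rome"]))  # → 1
```

The decomposition (top-down) decides what objects exist; the composition in `score` (bottom-up) assembles their behavior into the finished program.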
Test-driven development (TDD) is a development process that relies on the repetition of an exceptionally short development cycle. First, the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code needed to pass the test, and finally refactors the new code to suitable standards (Denne and Cleland-Huang 39-41). A minimal amount of code and a properly functioning program are the goal at every step.
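One cycle of that process can be sketched in Python with the standard `unittest` module. The function `slugify` and its test cases are hypothetical, chosen only to show the red-green rhythm: the test class is written first and fails until the implementation below it is added.

```python
import unittest

# One TDD cycle, sketched. Step 1 (red): write the tests first; with
# no slugify() defined, they fail. Step 2 (green): write the minimum
# code to pass. Step 3: refactor. The function itself is illustrative.

class TestSlugify(unittest.TestCase):  # written before the code under test
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Web Based Tests"), "web-based-tests")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  TDD  "), "tdd")

def slugify(title):
    """Minimum implementation written to satisfy the failing tests."""
    return title.strip().lower().replace(" ", "-")
```

Running `python -m unittest` at the "red" stage reports failures; once the implementation is added, the same command passes, completing one cycle before the next test is written.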
User-centered design (UCD) is a methodology that keeps the end user's needs in view at all stages of the development cycle (Losada, Urretavizcaya, and Fernández-Castro 2268). It is essentially a customer-oriented process in which the needs of the end user are of central importance throughout development. The method is quite common in software development and has four main phases: analysis, design, implementation, and deployment.
Works Cited
Akl, E., N. Maroun, and R. Klocke. “Electronic Mail was not Better than Postal Mail for Surveying Residents and Faculty.” J Clin Epidemiol 58 (2005):425-429. Web. 8 Dec. 2013.
Bowker, D. and Dillman, D. A. “An Experimental Evaluation of Left and Right Oriented Screens for Web Questionnaires.” WebSM, Faculty of Social Sciences, University of Ljubljana (2000): 1-19. Web. 5 Dec. 2013.
Buchanan, T., and J. Smith. “Using the Internet for Psychological Research: Personality Testing on the World Wide Web.” British Journal of Psychology 90 (1999): 125–144. Web. 5 Dec. 2013.
Caldwell, David J, and Adam Pate. “Effects of Question Formats on Student and Item Performance.” American Journal of Pharmaceutical Education 77.4(2013): 1-5. Web. 5 Dec. 2013.
Denne, Mark, and Jane Cleland-Huang. “The Incremental Funding Method: Data-Driven Software Development.” IEEE Software 21.3 (2004): 39-47. Web. 7 Dec. 2013.
Dillman, Don, Robert Tortora, John Conradt, and Dennis Bowker. “Influence of Plain vs. Fancy Design on Response Rates for Web Surveys.” WebSM, Faculty of Social Sciences, University of Ljubljana (1998): 1-6. Web. 5 Dec. 2013.
DuVerneay, Jessica. “9 Steps for Creating the Perfect User Test.” UserTesting.com. n.p. 2 May 2013. Web. 7 Dec. 2013.
Foxx, Jez, Craig Murray, and Anna Warm. “Conducting Research using Web-based Questionnaires: Practical, Methodological, and Ethical Considerations.” International Journal of Social Research Methodology 6.2 (2003): 167-180. Web. 7 Dec. 2013.
Gosling, Samuel, Simine Vazire, Sanjay Srivastava, and Oliver P. John. “Should We Trust Web-Based Studies? A Comparative Analysis of Six Preconceptions About Internet Questionnaires.” American Psychologist 59.2 (2004): 93-104. Web. 5 Dec. 2013.
McGraw, K., M. Tew, and J. Williams. “The Integrity of Web-delivered Experiments: Can You Trust the Data?” Psychological Science 11 (2000): 502–506. Web. 5 Dec. 2013.
Krantz, J., and R. Dalal. “Validity of Web-based Psychological Research.” Psychological Experiments on the Internet. 35–60. n.d. Web. 5 Dec. 2013.
Kiesler, S., and Lou Sproull. “Response Effects in the Electronic Survey.” Public Opinion Quarterly 50 (1986): 402–413. Web. 5 Dec. 2013.
Levine, S., R. Ancill, and A. Roberts. “Assessment of Suicide Risk by Computer-delivered Self-Rating Questionnaire: Preliminary Findings.” Acta Psychiatrica Scandinavica 80 (1989): 216–220. Web. 5 Dec. 2013.
Robins, R., K. Trzesniewski, J. Tracy, S. Gosling, and J. Potter. “Global Self-esteem Across the Life Span.” Psychology and Aging 17 (2002): 423–434. Web. 5 Dec. 2013.
“Test Construction.” Teaching and Learning. Indiana University Bloomington, n.p. 6 Sep. 2012. Web. 7 Dec. 2013.
Locke, S., and B. Gilbert. “Method of Psychological Assessment, Self-Disclosure, and Experiential Differences: A Study of Computer, Questionnaire, and Interview Assessment Formats.” Journal of Social Behavior & Personality 10 (1995): 255–263. Web. 5 Dec. 2013.
Losada, Begoña, Maite Urretavizcaya, and Isabel Fernández-Castro. “A Guide to Agile Development of Interactive Software with a ‘User Objectives’-Driven Methodology.” Science of Computer Programming 78.11 (2013): 2268-2281. Web. 7 Dec. 2013.
MacKechnie, Chris. “Information Technology & Its Role in the Modern Organization.” Chron. Hearst Communications, Inc. (n.d.) Web. 5 Dec. 2013.
Jackson, Michael. “Aspects of Abstraction in Software Development.” Software & Systems Modeling 11.4 (2012): 495-511. Web. 7 Dec. 2013.
"Top-Down Design (Introduction to Statistical Computing)". Masi.cscs.lsa.umich.edu.19 Sep. 2011. Web. 7 Dec. 2013.
Turner, C., L. Ku, S. Rogers, M. Lindberg, J. Pleck, and F. Sonenstein. “Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology.” Science 280 (1998): 867–873. Web. 5 Dec. 2013.
van Gelder, Marleen, Reini Bretveld and Nel Roeleveld. “Web-based Questionnaires: The Future in Epidemiology?” American Journal of Epidemiology 172.11 (2010): 1292-1298. Web. 10 Dec. 2013.
Zazelenchuk, Todd. “Measuring Satisfaction in Usability Tests: A Comparison of Questionnaire Administration Methods and an Investigation into Users' Rationales for Satisfaction.” Dissertation Abstracts International Section A: Humanities and Social Sciences 63 (2002): 1605. Web. 5 Dec. 2013.