Android MCQs
Q1. What is Android?
Q2. What is DDMS in Android?
Q3. What is an APK in Android?
Q4. Which is the key configuration file in an Android project?
A) AndroidManifest.xml B) APK C) .exe D) .rar
Q5. What does OHA stand for?
A) Open Handset Alliance B) Open Head Admin C) Open Hover All D) None of these
Q6. What operating system is used as the base of the Android stack?
A) Linux B) Apple C) Java D) Windows
Q7. What is contained within the manifest XML file?
A) The permissions the app requires B) Source code C) String values D) None of these
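As the options above suggest, required permissions are declared in the manifest. A minimal sketch of an AndroidManifest.xml (the package name, permission, and activity name here are illustrative, not from the original):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.demo">
    <!-- The permissions the app requires are declared here -->
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="Demo">
        <activity android:name=".MainActivity" />
    </application>
</manifest>
```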
Q8. What is contained within the layout XML file?
A) The code that is compiled to run the app
B) The strings used in the app
C) The permissions required by the app
D) Orientations and layouts that specify what the display looks like
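A layout file describes orientations and what the display looks like, as the last option says. A minimal sketch of a res/layout file (view types and text chosen for illustration):

```xml
<!-- res/layout/activity_main.xml: orientation and layout of the display -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello" />
</LinearLayout>
```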
Q9. Is Android open source?
A) Yes B) No C) Cannot determine
Q10. Android, Inc. was founded in Palo Alto, California, in October ____ by Andy Rubin.
A) 2003 B) 2012 C) 2014 D) 2016
Q11. Which Android version name begins with C?
Q12. Which Android version name begins with N?
Q13. Which IDE is used for Android development?
A) Xcode B) Notepad C) Eclipse D) Turbo C
Q14. Android development is primarily based on which language?
A) .java B) .c C) .cpp D) .cs
Q15. Which method is called when an application closes?
A) onDestroy B) onStart C) onResume D) onPause
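The options name Activity lifecycle callbacks, which Android invokes in a documented order: when an activity is closed, onPause(), onStop(), and finally onDestroy() run. Android's classes aren't available on a plain JVM, so this is only a sketch that models that documented shutdown order with stand-in methods:

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleDemo {
    // Records the order in which our stand-in callbacks fire.
    static final List<String> calls = new ArrayList<>();

    static void onPause()   { calls.add("onPause"); }
    static void onStop()    { calls.add("onStop"); }
    static void onDestroy() { calls.add("onDestroy"); }

    public static void main(String[] args) {
        // Simulate the framework tearing an activity down.
        onPause();
        onStop();
        onDestroy();
        System.out.println(calls); // [onPause, onStop, onDestroy]
    }
}
```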
Q16. Find the odd one out.
A) Chrome B) Firefox C) Safari D) Call recorder
Q17. Can you set the application icon without writing code?
A) Yes B) No
Q18. Is a license required for Android development?
A) Yes B) No
Q19. Which IDE is used for Apple software development?
A) Notepad B) Dreamweaver C) Xcode D) WAMP
Q20. Can Xcode run on a Windows PC in a virtual machine?
A) Yes B) No
Data Mining Architecture
Data mining is a core component of SQL Server Analysis Services (SSAS) 2012. Data mining is baked into SSAS's multidimensional designer and delivery architecture. The data structures are stored in the same database as SSAS analytical cubes, but they share only a few of the project assets. To define a data mining model in SQL Server Data Tools (SSDT), you need to create an SSAS multidimensional project, but you don't need to define any cubes or dimensions. A mining model can get its data directly from any data source or database table defined in the project's data source view, as Figure 1 shows.
Figure 1: Examining the Data Mining Architecture
Data Mining Tools
When data mining was first introduced, the only way to create and use a model was through the Business Intelligence Development Studio (BIDS), which was a database development tool rather than an application suited for data analysts. Several data-mining viewers were also developed so that a mining model could be viewed graphically, but all these viewers were baked into the development environment and not accessible to business users. Programmers could integrate some of these viewers into custom applications, but that wasn't done very often.
When Microsoft introduced two data mining add-ins (Data Mining Client and Table Analysis Tools) for Microsoft Excel 2007, data mining was brought to the business community. Many of the model viewers used in the development environment were integrated into the Excel add-ins, along with several features that use Excel's native charts, pivot tables, filters, slicers, and conditional formatting capabilities.
Since then, Microsoft has been providing tools that let business users do their own analyses. Data mining remains a core component of SSAS 2012, but the target audience for the design and delivery tools has shifted from the IT developers to business users, with Excel being the delivery vehicle. The latest data mining add-ins for Excel 2013, which were introduced with SQL Server 2012 SP1, have been enhanced and improved. Business users can use them to create and consume data mining models and to perform advanced predictive analyses.
A Guided Tour
In the following short tour, I'll introduce you to the Data Mining Model Designer in SSDT and the data mining add-ins for Excel 2013. If you want to follow along, I provided a sample database that I derived from real data obtained from the National Oceanic and Atmospheric Administration (NOAA). The database contains weather observations and climatic events—including tornados, hurricanes, tsunamis, earthquakes, and volcanoes—that have occurred over the past 40 years. It's more interesting to work with real information, but I make no guarantee about the accuracy or reliability of this data, so you shouldn't use it as the basis for making any decisions.
To follow along, you need to have:
- The Developer or Enterprise edition of SQL Server 2012 SP1, with the relational database engine, SSAS in multidimensional storage mode, and the client tools installed either locally on a single development machine or on a server to which you have administrative access
- An SSAS instance (installed locally on a single development machine or a server) on which you have permission to create databases and objects
- Access to a SQL Server relational instance that can read and process data for the mining structures
- Excel 2013 (32 bit or 64 bit) installed
- Download and install the Microsoft SQL Server 2012 SP1 Data Mining Add-ins for Microsoft Office from the Microsoft Download Center.
- Download and restore the sample Weather and Events database by clicking the Download the Code button near the top of the page.
Using SSDT's Data Mining Model Designer
In the following example, I'll walk you through creating a data mining project in SSDT. The data mining structure and data mining model that you'll create and explore will deal with tornado data from the U.S. states that are in "Tornado Alley," a region known for a high number of seasonal tornados. Those states are:
- Kansas (KS)
- Missouri (MO)
- Nebraska (NE)
- Oklahoma (OK)
- South Dakota (SD)
- Texas (TX)
Step 1: Create a New Data Mining Project
The first step is to create a new data mining project. To do so, open SSDT, select New on the File menu, and choose Analysis Services Multidimensional and Data Mining Project. Name both the project and the solution Weather and Events.
Step 2: Prepare the Data
The next step is to prepare the source data by simplifying, grouping, aggregating, and cleansing it. Don't underestimate the importance of this step. Data preparation is usually an iterative process. Start with small and simple sets of data. Create views or transform source data into separate tables, and don't be afraid to create multiple sets of data in different structures. Some mining models work best with values in separate columns, whereas other mining models work better with different attribute values in the same column. For ongoing analyses and complex data sources, your solution might need to include an extraction, transformation, and loading (ETL) process using SQL Server Integration Services (SSIS) packages.
The data preparation for this sample project has been completed for you. The views I've created in the Weather and Events database include data transformation logic, so this data is in the correct format for the analyses you'll perform.
Step 3: Add the Data Source to the Project
At this point, you need to add the Weather and Events database as a data source in your project. In SSDT's Solution Explorer, right-click the Data Sources folder and select New Data Source to start the Data Source Wizard. In the Data Source Wizard, click Next, then New to add a new data source. In the Connection Manager dialog box, connect to the relational database server and select the Weather and Events database, as Figure 2 shows. Click OK in the Connection Manager dialog box, then click the Next button.
Figure 2: Adding the Weather and Events Database as a Data Source
Figure 3: Selecting the Type of Authentication to Use
After you select your authentication method in the Impersonation Information page, click the Finish button. In the next page, accept the default data source name and click Finish to add the data source and close the Data Source Wizard.
Step 4: Add the Views
As I mentioned previously, the Weather and Events database already includes the views for this sample project. To add the views to your project, right-click the Data Source Views node in Solution Explorer and choose New Data Source View. When the Data Source View Wizard appears, click the Next button three times so that you're on the Select Tables and Views page. In the Available objects list on this page, select the six objects highlighted in Figure 4, then click the top-most button between the two list boxes to move the selected views to the Included objects list. Click Next, then click Finish on the following page to add the views and close the wizard.
Figure 4: Adding the Views
Figure 5: Setting the Logical Primary Key for One of the Data Source Views
Step 5: Create a Data Mining Structure
You're now ready to create a data mining structure that will have one new mining model. Right-click the Mining Structures node in Solution Explorer and select New Mining Structure. When the Data Mining Structure Wizard appears, click Next twice so that you're on the Create the Data Mining Structure page. As Figure 6 shows, there are nine mining model algorithms included in the Microsoft data mining framework. Each algorithm applies a unique set of mathematical formulas, logic, and rules to analyze data in the mining structure. Think of each as a separate black box, capable of analyzing a set of data and making predictions in different ways. This sample project uses the Microsoft Time Series algorithm, so select that algorithm from the drop-down list, then click Next twice to go to the Specify Table Types page. In the Input tables list on this page, select the Case check box for the vw_TornadosByYearByState view and click Next.
Figure 6: Selecting the Mining Model Algorithm
Figure 7: Specifying the Training Data
In the Specify Columns' Content and Data Type page, change the data type for the KS, MO, NE, OK, SD, and TX columns from Long to Double, because the time series algorithm works best with a floating-point data type. (It might return errors with long integer values.) Leave the Year column set to Long. Click Next.
In the Completing the Wizard page, you need to give the mining structure and mining model appropriate names. The mining structure will become the container for multiple models, and each model uses a specific model algorithm that should be incorporated into the name. The name of the structure should also reflect the name of the table or view on which it's based.
For this example, modify the default names so that the mining structure is named Tornados By Year By State and the mining model is named Time Series - Tornados By Year By State. Click Finish to create the data mining structure.
Step 6: Process and Explore the Mining Structure
With the mining structure created, it's time to process and explore it. On the Mining Models tab in the Data Mining Model Designer, right-click the Microsoft_Time_Series box and select Process Mining Structure and All Models, as Figure 8 shows.
Figure 8: Choosing the Option to Process the Mining Structure and Its Model
Figure 9: Watching the Progress in the Processing of the Mining Structure and Its Model
When the Mining Model Viewer is displayed, you'll see a line chart like that in Figure 10, which shows historical and predicted tornado data by year for the states in Tornado Alley. Specifically, it shows the number of tornados (as a percentage of deviation from a baseline value) in each state from 1973 through 2011, with predictions for five more years. The first thing you're likely to notice is a rather tall spike predicted for Kansas. We know that this prediction is wrong: it forecasts the future from 2011, and there wasn't roughly a 5,000 percent increase in tornados (i.e., nearly 500 tornados) in Kansas in 2012. This brings us back to the statistician George Box's statement that "all models are wrong but some are useful." This one is neither correct nor useful. I'll deal with it a little later. For now, clear the check box next to KS. As you can see in Figure 11, the projected trend is much better.
Next, clear all the check boxes, except for SD, which will isolate the results for South Dakota. Use the Prediction steps option to increase the prediction steps to 25. Notice that you're now projecting future tornado patterns 25 years into the future, to the year 2036. It's important to note that unless there's a very strong and regular pattern in the historical data, the time series algorithm might not be accurate beyond a few periods. However, looking at several periods will help you spot a predicted pattern and verify that the time series algorithm is doing its job.
Check the Show Deviations box to display the range of confidence in the accuracy of the predicted values. Figure 12 shows the results. South Dakota has had a fairly regular pattern of tornado activity from 1973 to 2011, which gives the time series algorithm a lot to work with. Even if you were to move the line to the upper or lower end of the deviation range, you could still see the predicted pattern.
Now, back to Kansas. Remember the big spike predicted for 2012? Clearly, the time series algorithm is having problems making a prediction with this data when using the default settings. This scenario is actually very common, and you just need to offer some guidance to get it on the right track.
Every one of the nine Microsoft data mining algorithms has a different set of parameters that do different things. These are the knobs and switches that control the behavior of the complex mathematical processes and rules used to make predictions. There are a lot of complex details that warrant further discussion and a deeper understanding. Many of these settings are covered in depth in the book Data Mining with Microsoft SQL Server 2008 (Wiley Publishing, 2009) by Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat. Making adjustments to these settings can either make a model work well or make the model go crazy. I encourage you to experiment with different settings by making a change and reprocessing the model. It can be time consuming, but this is an important part of the process for creating a useful data mining solution.
For this project, switch to the Mining Models tab, right-click the Microsoft_Time_Series box, and select Set Algorithm Parameters. Note that the default settings for the MAXIMUM_SERIES_VALUE and MINIMUM_SERIES_VALUE parameters are huge numbers. By leaving these unconstrained, the model algorithm is blowing a fuse and giving crazy results. Change MAXIMUM_SERIES_VALUE to 200 and MINIMUM_SERIES_VALUE to 0, then click the OK button to save the settings.
Reprocess and browse the model. This time the prediction results for KS are in a moderate range. If you increase the number of prediction steps, you'll see that the model seems to be making a reasonable set of predictions for annual tornado counts for the next 25 years. However, if you select the Show Deviations check box, you'll see that the algorithm has very little confidence in its ability to make a prediction with the information provided, as Figure 13 shows.
Why can't this model predict the future of tornado activity in Kansas? I posed this question to Mark Tabladillo, who does a lot of work with predictive modeling and statistical analysis. He said, "Typically, we do not get 'whys' in data mining." It's often necessary to create multiple models with different filters and variables to validate a pattern and a reliable prediction. The desire to explain "why" is human nature, but a scientific explanation might not always be possible. According to Tabladillo, "Correlation and causality are different, and most data mining results are correlation alone. Through time and patience, we can make a case for causality, though people, from academics to news reporters, are tempted to jump to a causal conclusion, either to project that they have done that requisite homework or simply to be the first mover-of-record."
In this case, it might be that Kansas doesn't have a strong fluctuating pattern of annual tornado counts like South Dakota does. Keep in mind that, so far, you're considering only the absolute count of all tornados in each state, aggregated over a year. You're not considering other attributes such as each tornado's category, strength, or duration or the damage caused by each tornado. This information is in the data and can be used to create more targeted models.
Abbreviations in Android Development
apk – Android package
dex – Dalvik Executable (Compiled Android application code file)
adb – Android Debug Bridge
DDMS – Dalvik Debug Monitor Service
AOSP – Android Open Source Project
GPS – Global Positioning System
IMEI – International Mobile Equipment Identity
LTE – Long Term Evolution
MTP – Media Transfer Protocol
NFC – Near Field Communication
OEM – Original Equipment Manufacturer
OTA – Over the Air
PPI – Pixels per inch
ROM – Read Only Memory
SDK – Software Development Kit
USB – Universal Serial Bus
VM – Virtual Machine
Android Objective-Type Questions and Answers
1) Once installed on a device, each Android application lives in _______?
a)device memory
b)external memory
c) security sandbox
d)None of the above
Ans) c
2) What is the parent class of Activity?
a)Object
b)Context
c)ActivityGroup
d)ContextThemeWrapper
Ans) d
3) What are the direct subclasses of Activity?
a)AccountAuthenticatorActivity
b) ActivityGroup
c) ExpandableListActivity
d) FragmentActivity
e) ListActivity
f) All the above
Ans) f
4) What are the indirect subclasses of Activity?
a)LauncherActivity
b)PreferenceActivity
c) TabActivity
d)All the above
Ans) d
5) What is the parent class of Service?
a)Object
b)Context
c) ContextWrapper
d)ContextThemeWrapper
Ans) c
6) Which of the following is an indirect subclass of Service?
a) RecognitionService
b) RemoteViewsService
c)SpellCheckerService
d)InputMethodService
Ans) d
7)Which component is not activated by an Intent?
a)Activity
b)Services
c)ContentProvider
d)BroadcastReceiver
Ans) c
8) How is a ContentProvider activated?
a)Using Intent
b)Using SQLite
c)Using ContentResolver
d)None of the above
Ans) c
9) Which of the following are important device characteristics that you should consider as you design and develop your application?
a)Screen size and density
b)Input configurations
c)Device features
d)Platform Version
e)All of the above
Ans) e
10)Which are the screen sizes in Android?
a)small
b)normal
c)large
d)extra large
e)All of the above
Ans) e
11)Which are the screen densities in Android?
a)low density
b)medium density
c)high density
d)extra high density
e)All of the above
Ans) e
12) You can shut down an activity by calling its _______ method.
a) onDestroy()
b)finishActivity()
c)finish()
d)None of the above
Ans) c
13)What is the difference between Activity context and Application Context?
a) The Activity instance is tied to the lifecycle of an Activity,
while the Application instance is tied to the lifecycle of the application.
b) The Activity instance is tied to the lifecycle of the application,
while the application instance is tied to the lifecycle of an Activity.
c) The Activity instance is tied to the lifecycle of the Activity,
while the application instance is tied to the lifecycle of an application.
d) None of the above
Ans) a
14)Which one is NOT related to fragment class?
a)DialogFragment
b)ListFragment
c)PreferenceFragment
d)CursorFragment
Ans) d
15) What is the definition of a Loader?
a) Loaders make it easy to asynchronously load data in an activity or fragment.
b) Loaders make it easy to synchronously load data in an activity or fragment.
c) Loaders do not make it easy to asynchronously load data in an activity or fragment.
d) None of the above.
Ans) a
16) What are the characteristics of Loaders?
a)They are available to every Activity and Fragment.
b)They provide asynchronous loading of data.
c)They monitor the source of their data and deliver new results when the content changes.
d)They automatically reconnect to the last loader's cursor when being recreated after a configuration change. Thus, they don't need to re-query their data.
e)All of the above.
Ans) e
17) In how many ways can a service be started?
a)Started
b)Bound
c)a & b
d)None of the above.
Ans) c
18) If your service is private to your own application and runs in the same process as the client (which is common), you should create your interface by extending the ________ class.
a) Messenger
b) Binder
c) AIDL
d)None of the above
Ans) b
19)If you need your interface to work across different processes, you can create an interface for the service with a ________?
a)Binder
b)Messenger
c)AIDL
d) b or c
Ans) d
20)AsyncTask allows you to perform asynchronous work on your user interface. It performs the blocking operations in a worker thread and then publishes the results on the UI thread.
a)true
b)false
Ans) a
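AsyncTask is an Android framework class and won't compile on a plain JVM, so here is a hedged plain-Java sketch of the same pattern (the BackgroundTask name and run() helper are inventions for illustration): blocking work runs on a worker thread, and the result is then handed back to the caller, which Android would marshal onto the UI thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class BackgroundTask {
    // Runs 'work' on a worker thread, playing the role of
    // AsyncTask.doInBackground(); the returned future delivers the result.
    public static <T> CompletableFuture<T> run(ExecutorService pool, Supplier<T> work) {
        return CompletableFuture.supplyAsync(work, pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // join() stands in for onPostExecute(): it waits for the worker's result.
        int answer = run(pool, () -> 6 * 7).join();
        System.out.println(answer); // prints 42
        pool.shutdown();
    }
}
```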
21) Which of the following are layouts in Android?
a)Frame Layout
b)Linear Layout
c)Relative Layout
d)Table Layout
e)All of the above
Ans) e
22) Which of the following are dialog classes in Android?
a)AlertDialog
b)ProgressDialog
c)DatePickerDialog
d)TimePickerDialog
e)All of the above
Ans) e
23) If you want to share data across all applications, which should you use?
a)Shared Preferences
b)Internal Storage
c)SQLite Databases
d)content provider
Ans) d
24) What is the difference between the Android API and the Google API?
a) The Google API includes Google Maps and other Google-specific libraries; the Android API includes only the core Android libraries.
b) The Google API includes only the core Android libraries; the Android API includes Google Maps and other Google-specific libraries.
c)None of the above.
Ans) a