Evolution of Data Models. Database: "A database is a set of data modeling the objects of a part of the real world and serving as the support for a computer application." [1] From this, we can say that a database is nothing more than a structured, interrelated collection of data drawn from the real world, meeting a few requirements such as: minimizing data redundancy, ensuring data consistency, persisting the data on a storage medium, etc. To make these databases easier to access, manage, and manipulate, secure database management systems (DBMSs) have been developed over the years, following different data models and supporting different data types. According to [3], a DBMS can be defined as "a software tool enabling the storage, querying, searching, and formatting of data stored on…
Text Sentiment Analysis

Now, after gathering data from any data source and cleaning it, you may want to do some sentiment analysis on the text, to understand what people think about your brand. The labels are: (0) neutral, (1) positive, (-1) negative. To do truly accurate sentiment analysis we would have to train machine learning models, but that is hard work and takes a long time. That's why, in Python, we have libraries that can do this analysis for us. In this tutorial, I'll provide a script that helps you analyze the sentiment of your customers or followers very simply. Imagine that you have extracted retweets from your Twitter account and saved them in a CSV file, as I showed you in the previous articles. We install these libraries:

pip install textblob      (used for translating text, analyzing sentiment, etc.)
pip install textblob_fr   (adds support for French text)

In this code I'll explain how to use the two different libraries.
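The workflow above can be sketched as follows. This is a minimal example, not the post's published script: it assumes your CSV has a column named `text` (adapt `text_column` to your file), and the mapping thresholds for the 0 / 1 / -1 labels are my own illustrative choice.

```python
import csv


def label_polarity(score, threshold=0.1):
    """Map a polarity score in [-1, 1] to the labels used in the post:
    1 = positive, -1 = negative, 0 = neutral."""
    if score > threshold:
        return 1
    if score < -threshold:
        return -1
    return 0


def analyze_csv(in_path, out_path, text_column="text"):
    """Read tweets from a CSV, score each one with textblob_fr, and write
    the text, raw polarity, and label to a new CSV.
    Requires: pip install textblob textblob_fr"""
    from textblob import TextBlob
    from textblob_fr import PatternTagger, PatternAnalyzer

    with open(in_path, newline="", encoding="utf-8") as f_in, \
         open(out_path, "w", newline="", encoding="utf-8") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.writer(f_out)
        writer.writerow(["text", "polarity", "label"])
        for row in reader:
            blob = TextBlob(row[text_column],
                            pos_tagger=PatternTagger(),
                            analyzer=PatternAnalyzer())
            # textblob_fr's analyzer returns a (polarity, subjectivity) pair
            polarity = blob.sentiment[0]
            writer.writerow([row[text_column], polarity,
                             label_polarity(polarity)])
```

For English-only text you can drop the `textblob_fr` tagger/analyzer arguments and use `TextBlob(text).sentiment.polarity` directly.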
Twitter Streaming Data

Hi everyone. This time I'll explain how to gather Twitter data in real time. If you remember, in the last article I showed you how to scrape data from Twitter in batch mode, and I explained how to create a Twitter app. I invite you to read that article before starting this one, because you need to have created your Twitter app first. Now that your app is created, you can begin gathering data in streaming mode, i.e. capturing each tweet at the moment it is posted on Twitter. In our case, I will use Python 2.7, so we install the library tweepy:

pip install tweepy

If you want to store the data in a CSV file: the detailed, commented code is published at https://github.com/dihiaselma/Tweeter/blob/master/Tweeter_Stream_CSV.py. In the code, you should paste the access token and keys that you got from your Twitter app, as described above. If you want to store the data in an HBase database: HBase
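Here is a minimal sketch of the streaming-to-CSV idea, not the repository's published script. It assumes tweepy's pre-4.0 `StreamListener` API (matching the Python 2.7 era of the post, though the sketch itself is written for Python 3), and the column choice in `tweet_to_row` is my own illustration; adapt it to the fields you need.

```python
import csv


def tweet_to_row(tweet):
    """Flatten the fields we keep from one tweet into a CSV row.
    `tweet` is a dict shaped like the JSON Twitter streams; the keys
    used here are an assumption for illustration."""
    return [tweet.get("id_str", ""),
            tweet.get("created_at", ""),
            tweet.get("user", {}).get("screen_name", ""),
            (tweet.get("text") or "").replace("\n", " ")]


def start_stream(keys, csv_path, track=("big data",)):
    """Open a filtered stream and append one CSV row per incoming tweet.
    `keys` holds the tokens from your Twitter app.
    Requires: pip install "tweepy<4" """
    import json
    import tweepy

    class CsvListener(tweepy.StreamListener):
        def on_data(self, raw):
            with open(csv_path, "a", newline="", encoding="utf-8") as f:
                csv.writer(f).writerow(tweet_to_row(json.loads(raw)))
            return True  # keep the stream alive

        def on_error(self, status_code):
            return status_code != 420  # stop on rate-limit disconnects

    auth = tweepy.OAuthHandler(keys["consumer_key"], keys["consumer_secret"])
    auth.set_access_token(keys["access_token"], keys["access_token_secret"])
    tweepy.Stream(auth, CsvListener()).filter(track=list(track))
```

Keeping `tweet_to_row` separate from the listener makes the CSV format easy to change and to test without opening a live stream.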
Tweets Search

In this article I will explain how to gather tweets matching given keywords in batch mode. The data will be stored in a CSV file. For this, you need to create a Twitter developer account, then create a Twitter app. Follow these steps:

1. Connect to https://apps.twitter.com/.
2. Click on "Create New App".
3. Provide the information about your app. For the URL, you can put http://www.example.com just to validate the form.
4. Move to the "Keys and Access Tokens" tab to generate your tokens and keys.

Now, install this library:

pip install TwitterSearch

In the code, paste the access token and keys that you got from your app above (I've marked where to paste them inside the code). Then you can run this code: https://github.com/dihiaselma/Tweeter/blob/master/TweeterSearch.py
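A keyword search like the one in the linked script can be sketched as below. This is an illustration rather than the published code: it assumes the `TwitterSearch` library's `TwitterSearchOrder`/`search_tweets_iterable` API, and `tweet_fields` picks three columns of my own choosing.

```python
import csv


def tweet_fields(tweet):
    """Pick the columns to store; `tweet` is a dict like the ones
    TwitterSearch yields (the exact keys are an assumption here)."""
    return [tweet["id"], tweet["user"]["screen_name"], tweet["text"]]


def search_to_csv(keywords, keys, csv_path):
    """Search tweets matching `keywords` and write them to a CSV file.
    `keys` holds consumer_key, consumer_secret, access_token,
    access_token_secret from your Twitter app.
    Requires: pip install TwitterSearch"""
    from TwitterSearch import (TwitterSearch, TwitterSearchOrder,
                               TwitterSearchException)
    try:
        tso = TwitterSearchOrder()
        tso.set_keywords(keywords)  # e.g. ["big", "data"]
        ts = TwitterSearch(**keys)
        with open(csv_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "user", "text"])
            for tweet in ts.search_tweets_iterable(tso):
                writer.writerow(tweet_fields(tweet))
    except TwitterSearchException as e:
        print(e)
```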
YouTube Channel Scraper

In the previous articles I showed you how to scrape data from Facebook. Today we move on to YouTube, and I'll explain how to get the different pieces of information about a given YouTube channel. I provide the Python code that gathers this data; note that it works with Python 3.6, not Python 2.7. The data will be stored in a CSV file. First of all, connect to https://console.cloud.google.com to create a project, then follow these steps:

1. Click on "Select a project".
2. Create a new project (I named mine "Test").
3. Click "Open".
4. Click on "APIs & Services", then select "Dashboard".
5. Click on "Enable APIs & Services".
6. Choose "YouTube Data API v3" and enable it.
7. Click "Create credentials" and get your API key.

Now copy the API key, in order to paste it into the Python code.
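Once you have the API key, fetching channel information can be sketched as follows. This is a minimal illustration, not the full scraper: it uses the YouTube Data API v3 `channels().list` endpoint via `google-api-python-client`, and `channel_summary` keeps only a handful of fields I chose for the example.

```python
def channel_summary(response):
    """Extract a few fields from a channels().list response dict.
    The dict shape follows the YouTube Data API v3; only the keys
    below are assumed."""
    item = response["items"][0]
    return {"title": item["snippet"]["title"],
            "subscribers": item["statistics"].get("subscriberCount"),
            "videos": item["statistics"].get("videoCount"),
            "views": item["statistics"].get("viewCount")}


def fetch_channel(api_key, channel_id):
    """Call the YouTube Data API v3 for one channel and summarize it.
    Requires: pip install google-api-python-client"""
    from googleapiclient.discovery import build
    youtube = build("youtube", "v3", developerKey=api_key)
    response = youtube.channels().list(part="snippet,statistics",
                                       id=channel_id).execute()
    return channel_summary(response)
```

Separating `channel_summary` from the API call keeps the parsing testable offline and makes it easy to add more columns before writing rows to your CSV.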