
Home automation with OpenCV 4


Let’s build a system capable of detecting the movement of people and objects by means of a digital video camera, and of sending alarm e-mails, using a Raspberry Pi 3 B+.

 

So-called Computer Vision, or artificial vision, is a research field that deals with how computers can interpret visual reality just like a human being; it uses complex algorithms capable of processing still or moving images, in order to provide indications and information on people and objects, just as our perceptive system would do.

From a practical perspective, the computer vision developed by a computer tries to automate the activities that the human visual system naturally performs. The application fields of Computer Vision range from environmental digitalization and the digital reconstruction of places and situations, to the recognition and visual tracking of objects and people.

Real-time image acquisition and processing of visual information often require high-resolution optical devices, powerful computers and tailor-made software.

In this article we will present an application of OpenCV 4, the latest release of the popular free library, distributed under a BSD (Berkeley Software Distribution) license, which allows the development of artificial vision applications, even very complex ones.

 

THE OPENCV 4 LIBRARY

First, let’s look at the main new features, as announced on the official OpenCV website (https://opencv.org):

  • OpenCV is now a C++11 library and requires a C++11-compliant compiler. The minimum CMake version required has been upgraded to 3.5.1.
  • Many C API functions from OpenCV 1.x have been removed.
  • Persistence (storing and loading of structured data from/to XML, YAML or JSON) in the core module has been completely re-implemented in C++11 and has also lost the C API.
  • The new G-API module has been added, which acts as a very efficient engine for graph-based image processing pipelines.
  • The dnn module has been updated with the Deep Learning Deployment Toolkit from the OpenVINO™ R4 toolkit. Consult the guide on how to build and use OpenCV with DLDT support.
  • The dnn module now includes the experimental Vulkan backend and supports networks in ONNX format.
  • The well-known Kinect Fusion algorithm has been implemented and optimized for CPU and GPU (OpenCL).
  • A QR code detector and decoder have been added to the objdetect module.
  • The high-efficiency DIS dense optical flow algorithm has been moved from opencv_contrib to the video module.
  • More details can be found in the announcements of the previous versions: 4.0-alpha, 4.0-beta, 4.0-rc and in the changelog.

 

Recall that OpenCV is available with C++, Python and Java interfaces and supports the Windows, Linux, Mac OS, iOS and Android platforms. For our project, we will use the Linux/Ubuntu version optimized for the Raspberry Pi 3B+ and the Python 3 interface.

The choice to develop a home automation project with OpenCV and the Raspberry Pi board is based on two simple reasons: the first is the compactness of the microcomputer, which can easily be housed in a small enclosure, perhaps 3D-printed at home; the second is the presence of the Raspberry Pi GPIO port, to which we can easily connect control and alarm devices.

Finally, let’s add the convenience of developing in a Python environment, which is already built into its operating system.

 

RASPBIAN STRETCH

Before installing OpenCV 4 we thought it would be useful to try it on the latest version of Raspbian. From the official Raspberry Pi website (https://raspberrypi.org) we recommend downloading the Raspbian Stretch image in the version “with desktop and recommended software”.

This version is strongly recommended because, among the system improvements, there is also the Italian localization of the desktop and language support for some applications. Furthermore, the initial wizard allows you to quickly configure the working environment, for example the Italian keyboard, time zone, Wi-Fi network and more.

For those who know little about the Raspberry Pi environment and do not know how to create an SD card with the Raspbian operating system, a good Help section is available on the official website.

 

VNC VIEWER

Even though you can work directly with a mouse, a keyboard and an HDMI monitor connected to the Raspberry Pi board, you will usually prefer to reach the Raspbian desktop through an SSH or VNC connection.

In this way, after configuring the Raspberry Pi Wi-Fi or Ethernet network, you can access the remote desktop using one of the many SSH terminals or via the handy VNC Viewer, downloadable free of charge from the official website https://www.realvnc.com and available for all platforms. For the record, we used the Windows PC version, but usage is identical on all other OSs.

CONFIGURATION OF RASPBERRY PI INTERFACES

Before configuring the VNC connection it is necessary to activate the VNC interface from the Preferences > Raspberry Pi Configuration menu of Raspbian, as shown in Fig. 1. To do this, once you have opened the Raspberry Pi Configuration window, just click on the VNC item.

 

Fig. 1

 

In the same window, you can also activate all the other interfaces, including the Camera interface, as shown in Fig. 2. The other interfaces, even if they are not needed for this project, may be useful in the future.

 

Fig. 2

 

To use the VNC connection you need to know the IP address that was automatically assigned to the board at the time of the first WLAN or LAN configuration. If you want to take advantage of the board’s Wi-Fi network, simply open the Network Preferences window (Fig. 3), where the address assigned by the WLAN connection is visible.

 

Fig. 3

 

VNC CONNECTION

To create a new connection with VNC Viewer, simply enter the IP address of the Raspberry Pi board in the Properties window, as shown in Fig. 4. Once the VNC connection has been started, the user name and authentication password will be requested,

 

Fig. 4

 

which, by default, are “pi” and “raspberry” respectively (Fig. 5).

Once the connection is made, the Raspbian desktop will be visible in full screen on the PC monitor. From this moment on, you can disconnect the mouse, keyboard and monitor from the Raspberry Pi board and have complete remote control. It may happen that the IP address assigned by DHCP changes, so it is advisable to configure a static IP on the Raspberry Pi board.
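On Raspbian Stretch this is typically done by editing /etc/dhcpcd.conf; a minimal sketch (the interface name and addresses below are placeholders to adapt to your own network):

interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1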

 

Fig. 5

 

RASPBERRY PI 3B+ AND CAMERA MODULE V2

For this project we opted for the Raspberry Pi 3B+ board and the Raspberry Pi Camera Module v2. Version 2 of the camera offers a resolution of up to 8 Megapixels, with a 3,280 x 2,464-pixel sensor. For all the other features, see the page on the official website https://www.raspberrypi.org/documentation/hardware/camera. The Camera Module v2 must be connected as shown in Fig. 6, using the flat cable provided and paying attention to the direction of insertion into the CSI port of the Raspberry Pi 3 B+.

 

Fig. 6

 

VIDEO CAMERA TEST

To see if the camera works correctly with Python, we recommend that you perform this simple test. In a new Python editor window write these few lines of code:

 

 
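The original listing is not reproduced here; a minimal sketch consistent with the behavior described below, using the picamera library:

from picamera import PiCamera
import time

camera = PiCamera()
camera.start_preview()   # open the preview window
time.sleep(10)           # keep it on screen for 10 seconds
camera.stop_preview()
camera.close()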

When the script runs, a preview window should open and then close after 10 seconds.

If this does not happen, check that the flat cable is correctly inserted, in the right direction, in the slot. If everything works correctly, you can safely proceed with the installation of OpenCV 4.

 

INSTALLING OPENCV 4 ON RASPBIAN STRETCH

As with previous versions, OpenCV 4 also requires a specific installation procedure. So, if you do not have the aforementioned Raspbian Stretch operating system, you will need to update the operating system to take advantage of the new features.

Warning! OpenCV 4 has not been tested on versions of Raspbian prior to Stretch.

Given the young age of OpenCV 4, we did not find many online guides, so we relied on the experience of Adrian Rosebrock, who runs his Deep Learning blog at https://www.pyimagesearch.com, and Satya Mallick, who runs the site https://www.learnopencv.com dedicated to Computer Vision and Machine Learning. Both report essentially the same installation procedure.

Note that many terminal commands use the tilde (~) character. If it is not present on the keyboard, press SHIFT+CTRL+U and enter the hexadecimal code 7e, followed by ENTER.

 

INSTALL OPENCV 4 DEPENDENCIES

Before starting any installation on Raspbian it is always advisable to update the repositories with the apt-get command, opening a terminal window:

sudo apt-get update && sudo apt-get upgrade

 

Then, again with apt-get, you need to install all the development tools, including the latest version of CMake:

sudo apt-get install build-essential cmake unzip pkg-config

 

Next, we install a number of libraries for image and video processing:

sudo apt-get install libjpeg-dev libpng-dev libtiff-dev

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

sudo apt-get install libxvidcore-dev libx264-dev

 

Then, let’s install the GTK toolkit for the graphical interface:

sudo apt-get install libgtk-3-dev

sudo apt-get install libcanberra-gtk*

(the asterisk wildcard pulls in the ARM-specific GTK packages)

 

At this point you need two packages that contain numerical optimizations for OpenCV:

sudo apt-get install libatlas-base-dev gfortran

 

Finally, we install the Python 3 development tools:

sudo apt-get install python3-dev

After installing all the prerequisites, you can download OpenCV 4.

 

OPENCV 4 DOWNLOAD

It is preferable to download the OpenCV 4 archives into the home folder. All the OpenCV 4 libraries are available in two GitHub repositories, called opencv and opencv_contrib. The contrib repository contains additional modules created by users.

Here are the commands to type to return to the home folder and download the two repositories.

cd ~

wget -O opencv.zip https://github.com/opencv/opencv/archive/4.0.0.zip

wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.0.0.zip

Once the zip archives have been downloaded, they must be decompressed, again in the home folder:

unzip opencv.zip

unzip opencv_contrib.zip

 

This will create the directories opencv-4.0.0 and opencv_contrib-4.0.0. For practical reasons it is advisable to rename the folders to opencv and opencv_contrib:

mv opencv-4.0.0 opencv

mv opencv_contrib-4.0.0 opencv_contrib

 

At this point, before the actual compilation of the OpenCV 4 library, it is necessary to set up the Python 3 virtual environment.

 

CONFIGURE THE VIRTUAL ENVIRONMENT OF PYTHON 3

If you are not familiar with Python’s virtual environments and want to know why it is advisable to work in a virtual environment, see the dedicated box.

First, you need to install pip:

wget https://bootstrap.pypa.io/get-pip.py

sudo python3 get-pip.py

 

Then we install virtualenv and virtualenvwrapper, which allow us to create virtual Python 3 environments:

sudo pip install virtualenv virtualenvwrapper

sudo rm -rf ~/get-pip.py ~/.cache/pip

 

To finish the installation of these tools, you need to update the ~/.profile file, using these simple echo commands:

echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.profile

echo "export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3" >> ~/.profile

echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile

 

The additions to the profile indicate the path to the virtualenvs working directory, created by the virtualenv tool in the home folder, and the path to the virtualenvwrapper script located in the /usr/local/bin folder. Once the profile is updated, simply activate it with the following command:

source ~/.profile

 

CREATING A VIRTUAL ENVIRONMENT TO CONTAIN OPENCV 4 AND ADDITIONAL PACKAGES

Now you can create an OpenCV 4 virtual environment for Python 3 and work independently of other environments.

This command line simply creates a virtual Python 3 environment called “cv”.

mkvirtualenv cv -p python3

 

Any name can be given to the virtual environment, but it is advisable, for practical reasons, to keep it short. If the profile is correctly activated and the “cv” virtual environment is created, we can confirm that we are in the “cv” environment using the workon command, as indicated by the arrow in Fig. 7:

workon cv

 

Fig. 7

 

INSTALLATION OF NUMPY

The Python package required by OpenCV 4 is NumPy. To install it, just type the following command:

pip install numpy

 

CMAKE AND COMPILATION OF OPENCV 4

To compile OpenCV 4, we will use CMake, followed by make. This is the most time-consuming step. First of all, go back to the opencv folder in the home directory and create a build subdirectory:
cd ~/opencv

mkdir build

cd build

 

Then run CMake to create the release build.

cmake -D CMAKE_BUILD_TYPE=RELEASE \

     -D CMAKE_INSTALL_PREFIX=/usr/local \

     -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \

     -D ENABLE_NEON=ON \

     -D ENABLE_VFPV3=ON \

     -D BUILD_TESTS=OFF \

     -D OPENCV_ENABLE_NONFREE=ON \

     -D INSTALL_PYTHON_EXAMPLES=OFF \

     -D BUILD_EXAMPLES=OFF ..

 

Note that the backslash character is used to continue the command on a new line. Also, note the presence of the path for compiling the extra opencv_contrib modules.

 

INCREASE SWAP ON RASPBERRY PI

Before starting the actual compilation it is advisable to increase the swap area. This prevents the compilation from being interrupted due to memory exhaustion. To do this it is enough to temporarily modify the swap file located at /etc/dphys-swapfile:

sudo nano /etc/dphys-swapfile

 

… and then change the CONF_SWAPSIZE variable from 100 to 2,048 MB. The old line is commented out with the # character:

# CONF_SWAPSIZE = 100

CONF_SWAPSIZE = 2048

 

If you do not perform this step, it is very likely that the board will hang during compilation. Once the swap file has been modified, the swap service must be stopped and restarted:

sudo /etc/init.d/dphys-swapfile stop

sudo /etc/init.d/dphys-swapfile start

 

COMPILATION OF OPENCV 4

Now everything is ready to build OpenCV 4. Just type the following command:

make -j4

 

Note that the -j4 option specifies the use of 4 cores for compilation. If compilation errors occur or the board crashes, you can try again without the -j4 option. Normally, the process of compiling OpenCV 4 is quite long and resource-intensive, so it is advisable to take a break and let the board work without external intervention of any kind, that is, without touching mouse or keyboard. At the end of the compilation, if everything went well, you will see the 100% compilation percentage (Fig. 8).

 

Fig. 8

 

Now we just have to install OpenCV 4 with two typical commands:

sudo make install

sudo ldconfig

 

Don’t forget to restore the swap file, returning CONF_SWAPSIZE to 100 MB, and to restart the swap service:

sudo nano /etc/dphys-swapfile

CONF_SWAPSIZE = 100

sudo /etc/init.d/dphys-swapfile stop

sudo /etc/init.d/dphys-swapfile start

 

SYMBOLIC LINK

One last essential step is the symbolic link of OpenCV 4 into the package directory of our virtual environment. You need to enter the site-packages directory of the virtual environment and link the cv2.so library. Here are the commands:

cd ~/.virtualenvs/cv/lib/python3.5/site-packages/

ln -s /usr/local/python/cv2/python-3.5/cv2.cpython-35m-arm-linux-gnueabihf.so cv2.so

cd ~

 

Note the -s option to ln, which stands for symbolic.

If you do not perform this step, OpenCV 4 will not be recognized by the Python 3 virtual environment. In this regard, check that the link to the library is present as ~/.virtualenvs/cv/lib/python3.5/site-packages/cv2.so.

 

CHECKING THE OPENCV 4 INSTALLATION

To see if OpenCV 4 has been installed correctly, staying inside the cv virtual environment, run the following from the terminal:

python

>>> import cv2

>>> cv2.__version__

'4.0.0'

>>> exit()

 

As shown in Fig. 9, the first command opens the Python 3 interpreter associated with the cv environment.

 

Fig. 9

 

The import cv2 command imports the library and the cv2.__version__ command shows version 4.0.0 of the library. With the exit() command you exit the interpreter and return to the terminal.

Remember that when the system restarts, you must reactivate the virtual environment (in our case “cv”) before starting work, with the workon cv command:

source ~/.profile

workon cv

 

To use the IDLE interface of Python 3 inside the virtual environment it is necessary to type the following command:

python -m idlelib.idle

 

With the IDLE interface open, you can create new scripts or open those you have already written. If you try to open a script that imports the cv2 library outside the virtual environment, a Traceback will appear with the error message:

ImportError: No module named 'cv2'

VIDEO SURVEILLANCE WITH OPENCV 4

We conceived this project as a valid alternative to alarm systems based on PIR sensors, ultrasonic sensors, contact sensors and so on. Once a movement is detected, the script photographs the intruder and sends the photo to an e-mail address, so the person can be identified.

Whatever the use, we created the script based on OpenCV 4 to capture the movement and send the pictures to an e-mail address. At the same time, the detected movement activates a GPIO port to which a relay and an alarm device can be connected. Raspbian includes libraries for managing mail and the GPIO port, so there is no need to install anything.

Below we comment on the salient parts of the script. The libraries imported at the beginning of the script are used only for SMTP management, i.e., for sending e-mails via a known mail server. For this purpose, it is necessary to connect the Raspberry Pi to the Wi-Fi or Ethernet network.

from smtplib import SMTP_SSL as SMTP

from email.mime.text import MIMEText

from email.mime.multipart import MIMEMultipart

from email.mime.base import MIMEBase

from email import encoders

 

In the following section, the parameters for the SMTP server must be modified. It is advisable to use the SMTP server normally used for sending mail from your home computer. The sender and destination variables contain the addresses of the sender and recipient, i.e., the address from which the e-mails are normally sent and the address to which the message is to be delivered. The username and password variables are the credentials used to authenticate to the SMTP server. The message type is plain text; the content variable indicates the body of the message, while the subject variable holds the subject of the message. The msg instance inherits the methods of the MIMEMultipart class.

SMTPserver = 'smtps.server.xxx'

sender = '[email protected]'

destination = '[email protected]'

username = '[email protected]'

password = 'myPassword'

text_subtype = 'plain'

content = 'Messaggio'

subject = 'Allarme'

msg = MIMEMultipart()

 

The email_send() function is designed to send an e-mail with an attachment, which in our case is a jpg image of the intruder or of the object that has moved in the room. As we will see later, the e-mail sending function is optional and may or may not be called in the script. If the sending is successful, “Sending executed” will be printed on the terminal, otherwise “Sending failed” will appear.
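The original listing of email_send() is not reproduced here; the following is a minimal sketch of what it plausibly does, based on the description above and the variables defined earlier (the error handling is our assumption):

def email_send():
    try:
        conn = SMTP(SMTPserver)          # open an SMTP_SSL connection
        conn.login(username, password)   # authenticate with the credentials above
        conn.sendmail(sender, destination, msg.as_string())
        conn.quit()
        print("Sending executed")
    except Exception:
        print("Sending failed")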

The following are the libraries that must be imported for the management of the video camera, the date and time, the video effects, the json data, cv2 (OpenCV 4) and the GPIO port.

import cv2

import time

import datetime

import imutils

import json

import RPi.GPIO as GPIO

from picamera.array import PiRGBArray

from picamera import PiCamera

 

The following instructions set GPIO26 as an output (you can, of course, choose another one):

GPIO.setmode(GPIO.BCM)

GPIO.setwarnings(False)

GPIO.setup(26, GPIO.OUT)

 

The script provides for the import of some parameters via a configuration file, which we have called “configuration.json”. Thanks to the json library it is therefore possible to parse the values assigned to the configuration parameters and assign them to variables or functions. The following instruction loads the json file, which is explained in the next paragraph.

 

conf = json.load(open("configuration.json"))

 

THE CONFIGURATION.JSON FILE

Usually, a JSON (JavaScript Object Notation) file is used in the JavaScript programming environment. Thanks to its versatility, it is very often used as a configuration file in other environments as well. It is a text format that follows the usual syntax of the JavaScript language. Opening the “configuration.json” file with a simple text editor you can see the parameters to be assigned to variables and functions:

 
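The original listing of the file is not shown here; based on the parameters and default values discussed below, it plausibly looks like this:

{
    "use_img": true,
    "use_email": true,
    "video_preview": true,
    "min_time": 3,
    "min_motion": 8,
    "threshold": 5,
    "resolution": [640, 480],
    "fps": 25,
    "min_area": 5000
}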

As you can see, some parameters are set to true or false and others contain numeric values and arrays. By reading these parameters with the aforementioned statement conf = json.load(open('configuration.json')), you can set the features of the video surveillance script without having to touch the code every time. For example, if you do not want to save the captured images and use video surveillance only to activate the alarm relay, just write the following statement in the json file:

"use_img": false

 

Similarly, if you do not want to send e-mails:

"use_email": false

 

If you do not want to see the preview window:

"video_preview": false

 

The other parameters allow you to set:

  • min_time: default value 3 seconds; the minimum time for motion to be detected;
  • min_motion: default value 8 frames; the minimum number of frames before switching on the LED, saving the file and/or sending the e-mail. You can increase or decrease the value to make the detection more or less sensitive;
  • threshold: default value 5; defines the sensitivity threshold for motion detection. This value can be increased or decreased depending on whether you want to make the contrast detection more or less sensitive;
  • resolution: (default 640×480 pixels) the video resolution, i.e., the size of the preview frame; it is recommended to leave the resolution unchanged to avoid delays in the video stream or in the sending of e-mails;
  • fps: sets the frames per second; it is recommended to leave the default value (25) to avoid delays in the video stream;
  • min_area: default 5000; the minimum area of the green box that is drawn around the detected subject. Usually, you do not need to change it.

 

Let’s continue with the code analysis: the camera object inherits the picamera library methods. The camera parameters are set based on the resolution and fps parameters read from the json file. Note the use of the tuple type, which in Python allows you to create a list of values separated by commas:

camera = PiCamera()

camera.resolution = tuple(conf["resolution"])

camera.framerate = conf["fps"]

rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))

 

Then some variables are defined, which will be used in the script to determine the start and end time of frame capture.

count = 0

avg = None

motionCounter = 0

lastUploaded = datetime.datetime.now()

 

When the script starts, you will see the phrase “Start …” on the terminal; then the system waits 2 seconds and begins the video capture. Note that in the preview window the text “Current status: No motion” is superimposed on the upper part, with the date and time below (Fig. 10).

 

Fig. 10

 

For this purpose, the imutils library is used to resize the frame, and the datetime library for the date and time from the Internet, so the time and date should be correct. With the cv2 library, you set the mask to detect a moving object. Note that the thresh parameter reads the threshold value, i.e., the threshold parameter from the configuration json file. The motionFlag variable is set to False.

print("Start ...")

time.sleep(2)

 

The whole for loop serves to count the frames and detect the variations in the gray mask that is superimposed on the captured image, as shown in Fig. 11. The threshold value sets the intervention level for detecting the motion.

 

Fig. 11

 

for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):

 

 
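The full body of the loop is not reproduced in the article; the following condensed sketch shows the classic frame-differencing approach described here (resizing with imutils, a gray mask, a threshold read from the json file, and green boxes around areas larger than min_area). Names beyond those quoted in the text are our assumptions:

    frame = imutils.resize(f.array, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # initialize the running average with the first frame
    if avg is None:
        avg = gray.copy().astype("float")
        rawCapture.truncate(0)
        continue

    # update the average and compute the difference mask
    cv2.accumulateWeighted(gray, avg, 0.5)
    frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
    thresh = cv2.threshold(frameDelta, conf["threshold"], 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)

    # find the contours of the moving areas and draw the green boxes
    cnts = imutils.grab_contours(cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
    text = "No motion"
    for c in cnts:
        if cv2.contourArea(c) < conf["min_area"]:
            continue
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Motion detected"

    # overlay the status text on the frame and reset the capture buffer
    cv2.putText(frame, "Current status: " + text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    rawCapture.truncate(0)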

Once motion is detected, the subject is highlighted with green boxes around the edges. When the threshold value is exceeded, the text “Current status: Motion detected” appears and the motionFlag variable is set to True.

text = "Motion detected"

motionFlag = True

ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")

 

At this point, if the motionFlag variable is True, the comparison begins between the current time (currentTime) and the time of the last detection (lastTime), against the min_time parameter set in the json file, and the count of detected movements starts with the motionCounter variable.

if text == "Motion detected":

    if (currentTime - lastTime).seconds >= conf["min_time"]:

        motionCounter += 1

 

If the number of movements detected exceeds the value set by the min_motion parameter in the json file, you can decide whether to save the jpg file of the subject. This function is set by the use_img parameter of the json file. Note that the file name img = 'foto_' + str(count) + '.jpg' follows a progressive numbering.

The files are saved locally with the instruction cv2.imwrite(img, frame) as foto_1.jpg, foto_2.jpg and so on.

 

if motionCounter >= conf["min_motion"]:

    if conf["use_img"]:

        count += 1

        img = 'foto_' + str(count) + '.jpg'

        cv2.imwrite(img, frame)

 

Meanwhile, the LED/relay on the GPIO26 pin is activated or deactivated, and “LED ON” is printed on the terminal, depending on whether the movement is detected or not:

GPIO.output(26, GPIO.HIGH)

print("LED ON")

 

or:

print("LED OFF")

GPIO.output(26, GPIO.LOW)

 

If the sending of e-mails has been enabled with the use_email parameter, the part object that handles the MIME (Multipurpose Internet Mail Extensions) format is created to compose the mail message. The message is automatically encoded in base64 format and then sent via the email_send() function, seen above.
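A minimal sketch of this composition step, consistent with the description above (the exact code is not reproduced in the article, so details are assumptions):

if conf["use_email"]:
    part = MIMEBase('application', 'octet-stream')
    with open(img, 'rb') as f:       # img is the jpg file saved above
        part.set_payload(f.read())
    encoders.encode_base64(part)     # encode the attachment in base64
    part.add_header('Content-Disposition', 'attachment', filename=img)
    msg.attach(part)
    email_send()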

The e-mail message will carry the file “foto_1.jpg”, then the file “foto_2.jpg” and so on, depending on how many times the camera detects a new movement.

 

Fig. 12 shows an example of a received alarm e-mail.

 

 

CONCLUSIONS

That’s all for now. To gain some experience, you can try modifying the listing to add features, or changing the parameters of the json file to see what happens.

We are working on a project for recognizing faces and emotional expressions with OpenCV 4. See you soon!

 

FROM OPENSTORE

Raspberry Pi 3 Model B+

Camera module 8 Megapixel for Raspberry Pi
