At Mersa Technology, we are developing our own flight control system by leveraging our in-house capabilities in algorithm architecture development, electronic circuit board development, and embedded and interface software development.
Our goal is to eliminate external dependency and cater to the demands of local users, particularly in the context of agricultural drones. We achieve high domestic content in drone production by manufacturing the most critical components, including the autopilot module and software, as well as the circuit board for power distribution control, embedded software, and interface control and tracking software.
Mersa Technology is a 100% Turkish-owned private company that has been developing software, hardware, and control solutions for Unmanned Aerial Vehicles (UAVs) since 2021.
The company is focused on flight control solutions to be used across a wide spectrum of Unmanned Aerial Vehicles, commonly known as drones, including remotely piloted aircraft systems.
Among its primary goals is to ensure the reliability and robustness of flight control solutions across various platforms and changing weather conditions. The usage areas for autopilot software and hardware encompass high-performance tactical UAVs, drones, VTOLs, and mini UAVs.
Mersa Technology is particularly engaged in designing, developing, and sustaining modular autopilot systems for the education, agriculture, security, and defense sectors. The company aims to develop open-source autopilot software and hardware by conducting global efforts to create a better, more efficient, and safer infrastructure.
The core determinant of the company’s success lies in its vision to enhance directional reference systems and consolidate data from multiple sensors (GNSS, airspeed, magnetometers, gyroscopes, accelerometers, etc.) to establish a comprehensive infrastructure. This vision provides a significant advantage in developing flight control algorithms.
Mersa Technology is a startup focused on precision agriculture solutions. Leveraging both its own resources and collaborative efforts, the company is developing drones that perform agricultural tasks such as spraying, seeding, and fertilizing more economically and efficiently.
FFCSv1 Autopilot
FFCSv1 is a product comprising an open-source flight control software and hardware set designed for drones and other unmanned aerial vehicles. The project aims to provide drone developers with a flexible toolset for sharing technologies and creating customized solutions for various drone applications. FFCSv1 supports drone hardware and offers a software stack, enabling scalable development and maintenance of the ecosystem's hardware and software. This approach allows drone developers to work within an open-source framework and gives them the opportunity to customize the technology.

Furthermore, developers adhering to FFCSv1's standards can ensure greater consistency and integration of hardware and software, thereby fostering more compatibility and collaboration within the drone industry. The primary goal of the FFCSv1 project is to foster information and technology sharing among developers working on drone technologies, thereby accelerating advancements and supporting standardization within the drone industry. Consequently, this initiative is envisioned to make the drone and unmanned aerial vehicle industry more cohesive, reliable, and innovative.
MODULAR ARCHITECTURE DESIGN
FFCSv1 is highly modular and extensible in both hardware and software, built on a port-based architecture. This means that when developers add components, the addition does not compromise the robustness or performance of the rest of the system.
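To illustrate the port-based idea, here is a hypothetical sketch in Python: the names `PortRegistry`, `subscribe`, and `publish` are ours for illustration, not FFCSv1's actual API.

```python
from typing import Callable, Dict, List

# Hypothetical port-based module registry. Modules attach to named ports;
# the core never needs to know which modules exist.
class PortRegistry:
    def __init__(self) -> None:
        self._ports: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, port: str, handler: Callable[[dict], None]) -> None:
        # A new module plugs into a port without touching the core.
        self._ports.setdefault(port, []).append(handler)

    def publish(self, port: str, message: dict) -> None:
        # The core publishes data; only subscribers of this port see it.
        for handler in self._ports.get(port, []):
            handler(message)

registry = PortRegistry()
received = []
registry.subscribe("telemetry", received.append)   # "telemetry" is an example port name
registry.publish("telemetry", {"alt_m": 12.5})
```

Because each module only touches the ports it subscribes to, adding or replacing a module cannot corrupt the core or unrelated ports, which is the robustness property described above.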
OPEN SOURCE CODE FEATURE
Some modules of the FFCSv1 product will be open to global development communities. As a result, FFCSv1 will not only cater to the needs of a single company but can be adopted as a general toolset. Our goal is for this product to be widely used and accepted within the industry.
CONFIGURABILITY
The FFCSv1 product aims to provide optimized APIs and SDKs for developers working with different integrations. All modules will be independent, allowing any one of them to be replaced with a different module without changing the core. Rearranging software features will also be straightforward.
AUTONOMY
FFCSv1 is designed to deeply integrate several modules with embedded computer vision for autonomous capabilities. This design incorporates localization and obstacle detection algorithms, aiming to reduce the obstacles and issues developers encounter when building autonomous systems.
INTEROPERABILITY
FFCSv1 will offer an ecosystem supported by various devices that encompass not only robust flight software and hardware but also contemporary artificial intelligence software and hardware.
ROBUST SECURITY FEATURES
Top-tier safety features such as automatic fault detection algorithms, obstacle detection mode support, and regional geographic information systems will be present in the code base by default. These features will be easily configurable and adjustable for custom systems.
LICENSING
The FFCSv1 product will be offered under both licensed and unlicensed usage terms. This means the software will also be available for proprietary use, with modifications permitted to optimize it for a company's specific needs.
Artificial Intelligence
Autonomous vehicles are now a major research area, particularly for large companies, and are widely expected to become the standard mode of transport in the future. While 20 years ago autonomous vehicles seemed like science fiction, today they are becoming part of everyday life.
In fact, the first autonomous vehicle was conceived by inventor Francis Houdina in 1925, when he drove a radio-controlled car (Image 1) through the streets of Manhattan. Later milestones followed: the DARPA Grand Challenge in 2004, Google in 2009, and companies such as Ford, Mercedes, and BMW entering the sector in 2013.
What is Deep Learning?
Deep learning is a field of study that covers artificial neural networks and similar machine learning algorithms that contain one or more hidden layers.
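As a minimal illustration of this definition, the sketch below runs a forward pass through a network with exactly one hidden layer; the weights are fixed illustrative values, not a trained model.

```python
import numpy as np

# A network with one hidden layer -- the defining feature mentioned above.
def relu(x):
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    hidden = relu(w1 @ x + b1)   # hidden layer with nonlinearity
    return w2 @ hidden + b2      # linear output layer

x = np.array([1.0, 2.0])                          # input features
w1 = np.array([[0.5, -0.25], [0.75, 0.5]])        # input -> hidden weights
b1 = np.zeros(2)
w2 = np.array([[1.0, 1.0]])                       # hidden -> output weights
b2 = np.zeros(1)
y = forward(x, w1, b1, w2, b2)
print(y)  # → [1.75]
```

Stacking more such hidden layers ("deeper" networks) is what gives the field its name.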
Autonomous Vehicles Using Deep Learning
With technological developments in both hardware and software, we can now receive data from every possible source thanks to LIDAR sensors, cameras, GPS, ultrasonic sensors, etc.
Autonomous vehicles are built from five basic stages:
Localisation
Localisation is, in short, the autonomous vehicle knowing its own position. The vehicle's position is calculated with high precision from the sensor data mentioned above, using a technique called the Kalman filter.
The main purpose of this stage is as follows:
In autonomous vehicles, GPS data alone, with a margin of error of 1-10 metres, can lead to fatal consequences. Therefore, the potential error is minimised by fusing data from sensors such as GPS and LIDAR.
How exactly is the position of the vehicle determined?
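One common answer is recursive filtering. The following one-dimensional Kalman-update sketch (with illustrative noise values, not a production localisation stack) shows how each new noisy fix pulls the estimate toward the truth while shrinking its uncertainty:

```python
# 1-D Kalman measurement update: fuse a noisy GPS-like position fix
# into the current estimate. Illustrative values only.
def kalman_update(x_est, p_est, z, r):
    # x_est: prior position estimate, p_est: its variance
    # z: new measurement, r: measurement noise variance
    k = p_est / (p_est + r)          # Kalman gain: trust measurement vs prior
    x_new = x_est + k * (z - x_est)  # corrected estimate
    p_new = (1 - k) * p_est          # uncertainty always shrinks
    return x_new, p_new

x, p = 0.0, 100.0          # vague prior: "somewhere near 0, variance 100"
for z in [5.2, 4.8, 5.1]:  # repeated noisy fixes near the true 5 m
    x, p = kalman_update(x, p, z, r=4.0)
print(round(x, 2), round(p, 2))  # → 4.97 1.32
```

A real localisation stack extends this to many dimensions (position, velocity, heading) and adds a prediction step driven by the vehicle's motion model between measurements.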
Perception
Perception is how cars perceive their environment. This is where computer vision and neural networks come into play. Studies such as object recognition and object location are currently being developed using deep learning algorithms.
Prediction
In the prediction stage, the vehicle predicts the behaviour of the vehicles or people around it: which direction a nearby vehicle will go, and at what speed. In this way, the autonomous vehicle can react in advance to events that may occur. This is realised with recurrent neural networks (RNNs).
Recurrent Neural Network (RNN) – Convolutional neural networks (CNNs) process the information in a given image frame independently of what they have learnt from previous frames. However, the RNN structure supports memory in such a way that past detections can be utilised when calculating future predictions. Therefore, RNNs offer a natural way to predict the next step.
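The memory property can be seen in a toy Elman-style recurrent step; the scalar weights below are illustrative, not a trained model.

```python
import math

# One scalar RNN step: the hidden state h carries information from past
# frames, unlike a CNN that treats each frame independently.
def rnn_step(x, h, w_x=0.5, w_h=0.9):
    return math.tanh(w_x * x + w_h * h)

h = 0.0
outputs = []
for x in [1.0, 1.0, 1.0]:        # the same input on every frame...
    h = rnn_step(x, h)
    outputs.append(round(h, 3))
print(outputs)                    # ...yet the output keeps evolving,
                                  # because h remembers earlier frames
```

A feed-forward network given the same input three times would produce the same output three times; here the recurrent state makes each step depend on the history, which is exactly what next-step prediction needs.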
Path Planning
The route that the autonomous vehicle will follow is called the planned path. Planning is realised with search algorithms (such as A*), lattice planning, and reinforcement learning.
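A minimal sketch of the search-based approach: A* on a small occupancy grid (0 = free, 1 = blocked) with a Manhattan-distance heuristic. The grid and unit step costs are illustrative.

```python
import heapq

# A* grid search: expand cells in order of g (cost so far) + h (heuristic
# estimate to goal), then walk the came-from links back to recover the path.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # (f-score, cell) priority queue
    g = {start: 0}
    came = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:       # reconstruct path from goal to start
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1      # unit cost per move
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None                      # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour through the right column
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(path)
```

Because the Manhattan heuristic never overestimates the remaining distance on a 4-connected grid, A* returns a shortest path; lattice planners apply the same idea to kinematically feasible motion primitives instead of grid cells.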
Control
In the control stage, the vehicle's steering direction, speed, and braking are adjusted. The most commonly used method is PID (Proportional-Integral-Derivative) control. Steering control acts on information from the lane-tracking system, and the lane-keeping output is also fed to a separate neural network to optimise driving.
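The PID idea can be sketched in a few lines; the gains and the first-order vehicle model below are illustrative, not a real vehicle controller.

```python
# PID speed controller driving a crude first-order vehicle model
# toward a 10 m/s setpoint. Gains are illustrative.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt                  # I: removes steady-state error
        derivative = (error - self.prev_error) / dt  # D: damps rapid changes
        self.prev_error = error
        return (self.kp * error                      # P: reacts to current error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.5, ki=0.1, kd=0.05)
speed, dt = 0.0, 0.1
for _ in range(300):                 # 30 s of simulated driving
    throttle = pid.update(10.0, speed, dt)
    speed += throttle * dt           # toy vehicle: speed follows throttle
print(round(speed, 1))
```

The proportional term does most of the work, the integral term eliminates the residual offset, and the derivative term prevents overshoot; the same structure applies to steering angle and brake pressure loops.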