Thesis Pte Ltd


One thesis

Copyright © 2025 Thesis Pte. Ltd. All Rights Reserved

OFFICE

34 Boon Leat Terrace #04-09A Singapore 119866


Categories
News Tech bites

We need more USB ports

 

In the pursuit of ever-slimmer laptops, manufacturers are removing what they see as "bulky" USB ports from their new product releases. That may result in a slimmer notebook, but more often than not, users end up adding dongles and further bulk so that existing peripherals can still be used.

Mobile developers on our team use the Microsoft Surface Pro 3, and it has been a great laptop for mobile development since most of our software development kits (SDKs) and integrated development environments (IDEs) run on Windows. It’s compact, feels good, is light (only 800g!), has a touchscreen, and sports a beefy Intel Core i7-4650U processor. It’s perfect. Except for one thing.

It has only one USB port.

Although the single USB port is not yet a widespread design choice among Windows laptop manufacturers, the net was abuzz with gripes surrounding Apple’s design of a single USB-C port on its new range of MacBooks. CNET published an article with some “survival tips”, but it doesn’t hide the fact that sometimes we do need more ports on our computing devices – for development kits, universal asynchronous receiver/transmitters (UARTs), USB-to-serial adapters, Bluetooth 4.1 dongles, syncing our smartphones, and other memory devices. You name it.

A USB hub is an obvious solution. But somehow none of the USB hubs we’ve tried has had the right combination of data-transfer reliability and aesthetics. Cheap hubs with the right profile keep disconnecting our devices, while reliable hubs are expensive.

Is it too much to ask for a few more USB ports? So we made our own.

Selecting a hub-controller chip is easy, as practically every semiconductor manufacturer has a product line of USB hub controllers: Texas Instruments, STMicroelectronics, Cypress, Maxim, Renesas, and so on. Each manufacturer’s chips have slight peripheral advantages over its competitors’.

However, a key goal was to keep costs low. It is good design practice to determine the cost of your Bill of Materials (BOM) before jumping into the design. More often than not, engineers and inventors jump straight into a design and realize far too late that the cost of constructing it undermines the value proposition of the invention or device they are trying to make, leaving only marginal improvements over existing solutions.

Consumers are increasingly well-informed when selecting products and devices for their technological tastes and needs, and cost and value are no doubt factors in a purchase decision.

A cursory search revealed Chinese and Taiwanese semiconductor companies – such as Alcor Micro, Genesys Logic and JFD-IC – producing equivalent USB hub-controller chips; these companies likely produce the bulk of the world’s original-equipment-manufacturer (OEM) USB-controller chips.

We found Genesys Logic’s GL850G controller chip in one of the hubs lying around the lab. Alcor Micro’s AU6256 and JFD-IC’s FE1.1s also presented attractive options.

As a designer, you should consider several factors:

  1. Does your manufacturer have access to the components that require assembly?
    If it does not, it’s likely that you will have to set up that relationship.
  2. Can you get the necessary components in the volume that you require?
    It’s unlikely that you’ll get 10 pieces from an OEM if you cannot commit to a Minimum Order Quantity (MOQ) or sales volume; in that case, you may have to get a more readily available component from a distributor like Future Electronics, Arrow Electronics, Avnet, Digikey, Element14 or Mouser.
  3. Is there a commercial relationship between you and the component provider?
    More often than not, OEMs from China, Taiwan or South Korea converse in their native languages, and cross-border sales teams will face challenges with currency conversion, customs restrictions and taxes. The offset of these hidden costs may render the component more expensive to implement in your design than originally anticipated.

Availability, MOQ, documentation, and cost-per-chip are all factors an engineer has to be aware of during the process of a design. In the end, we selected Standard Microsystems Corporation (SMSC)’s USB2514 USB 2.0 Hi-Speed Hub Controller. SMSC is now owned by Microchip, which means the chip is marketed by a reliable and reputable OEM semiconductor company with established distribution channels.

The USB2514 has a good price of US$1.40 a chip, comprehensive documentation and a very minimal BOM and, most importantly, it is widely available and accessible to electronic contract manufacturers and assemblers.
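To make the point about costing concrete, here is a rough sketch of the kind of BOM check described above. Apart from the USB2514’s US$1.40 price, every line item and figure below is hypothetical:

```python
# Rough BOM unit-cost sketch for the hub. Apart from the US$1.40
# hub controller, every price here is a made-up illustrative figure.
bom = {
    "USB2514 hub controller": (1, 1.40),  # (quantity, unit price in US$)
    "USB receptacles":        (4, 0.30),
    "crystal + passives":     (1, 0.50),
    "PCB":                    (1, 0.80),
}

def unit_cost(bom):
    """Total component cost for one assembled board."""
    return sum(qty * price for qty, price in bom.values())

print(f"Estimated BOM cost per unit: US${unit_cost(bom):.2f}")
```

Doing this sum before committing to a design is what keeps the value proposition intact.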

Here is our design and the final outcome!

Top view (male USB headers were soldered on later)

Male headers were soldered on and protected with heat-shrink tubing, then the hubs were distributed to our engineers

Close up
In our opinion, it looks more badass-punkish when its bare electronic guts are exposed. Future design iterations would involve changing the hub to a USB 3.0 controller and possibly adding other mechanical features or enclosures; for now, this design works fine and is a talking point when our mobile engineers assist clients on-site. Check out our USB-C “spacedock” design for the advanced features that were later added. If you’re interested in making such a USB hub of your own, we would love to work with you.
Build the future!


Intro to Eddystone – Google’s latest proximity beacon protocol

At the height of iBeacon’s popularity, Google released a similar but more powerful proximity beacon protocol called Eddystone™ in July 2015. The protocol was designed to be platform-agnostic and more flexible in the data that can be broadcast – key features intended to make it a better platform than the iBeacon protocol. App developers can find more information on the available APIs here.

However flexible Google wished Eddystone™ to be, it had to keep in mind the space limit of a BLE broadcast packet. Instead of trying to cram all sorts of data within that limit, Google overcame the problem by introducing “frames”. Three frame types are defined in the Eddystone™ protocol – namely UID, URL and TLM. Each frame carries a different set of information to fulfill the broadcasting needs of most proximity beacons.

UID

This is the most basic frame of the Eddystone™ protocol, with a specification very similar to iBeacon’s. In this frame, you indicate the device’s unique ID, the instance ID and the transmission power. The unique ID, also known as the namespace, is a 10-byte value; you can generate it using an online UUID generator. The instance ID works like the Major and Minor IDs from iBeacon. Finally, there is the transmission power, a.k.a. the ranging data. To calibrate this: “Tx power is the received power at 0 meters measured in dBm. The value may range from -100 dBm to +20 dBm at a resolution of 1 dBm.

Note to developers: the best way to determine the precise value to put into this field is to measure the actual output of your beacon from 1 meter away and then add 41 dBm to that. 41dBm is the signal loss that occurs over 1 meter.” (source)

Unlike iBeacon’s transmission power, which is measured at a 1 m range, Eddystone™ requires the transmission power to be measured at 0 m instead. However, as Google’s calibration note explains, you measure the transmission at a 1 m range and add 41 dBm to account for the signal loss over 1 metre; the result is the transmission power at 0 m.
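As a quick sanity check, that calibration step can be expressed in a few lines; the clamping follows the -100 to +20 dBm field limits quoted above:

```python
def eddystone_tx_power_at_0m(rssi_at_1m_dbm):
    """Convert an RSSI measured at 1 m into the 0 m 'ranging data'
    value the UID/URL frames expect: add 41 dBm to compensate for the
    signal loss over 1 metre, then clamp to the allowed field range."""
    value = rssi_at_1m_dbm + 41
    return max(-100, min(20, value))   # field range is -100..+20 dBm

print(eddystone_tx_power_at_0m(-59))   # -> -18
```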

URL

This frame is designed to house a URL for your proximity beacon to broadcast. Do make sure to indicate “0x10” in the Frame Type byte, or the Eddystone™ app APIs will not recognize the frame properly. Following that is the transmission power, which can be calibrated with the same procedure described in the section above. The URL scheme indicates which URL prefix should be prepended to the decoded URL; currently, there are 4 options to choose from:

Finally, there is the encoded URL. For example, if I wish to broadcast “onethesis.com”, my encoded URL will look like “0x6f6e6574686573697300” (onethesis = 6f 6e 65 74 68 65 73 69 73; note that the data is entirely in hexadecimal). The string is terminated with “0x00”, which will be decoded as “.com/”. A full list of the ending characters is listed below:
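The encoding above can be sketched in code. This is a minimal illustration, with the scheme-prefix and suffix tables transcribed from the Eddystone-URL specification:

```python
# Minimal Eddystone-URL encoder sketch. Scheme prefixes and suffix
# codes are taken from the Eddystone-URL specification.
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
SUFFIXES = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
            ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06,
            ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0a,
            ".info": 0x0b, ".biz": 0x0c, ".gov": 0x0d}

def encode_url(url):
    """Return (scheme_byte, encoded_url_bytes) for an Eddystone-URL frame."""
    for scheme, code in SCHEMES.items():
        if url.startswith(scheme):
            rest, scheme_byte = url[len(scheme):], code
            break
    else:
        raise ValueError("URL must start with a known scheme prefix")
    out = bytearray()
    i = 0
    while i < len(rest):
        for suffix, code in SUFFIXES.items():   # longer "/" forms match first
            if rest.startswith(suffix, i):
                out.append(code)
                i += len(suffix)
                break
        else:
            out.append(ord(rest[i]))
            i += 1
    return scheme_byte, bytes(out)

scheme, encoded = encode_url("http://onethesis.com/")
print(hex(scheme), encoded.hex())   # 0x2 6f6e6574686573697300
```

The encoded bytes match the “0x6f6e6574686573697300” example above, with the trailing 0x00 standing in for “.com/”.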

TLM

This frame is also known as the telemetry frame. It holds device-specific information that can be useful for debugging or data-logging purposes. To start, you must indicate “0x20” in the Frame Type byte, followed by the TLM version. The subsequent data are device-specific information: battery voltage, beacon temperature, advertising PDU count (ADV_CNT) and time since power-on or reboot (SEC_CNT).

Battery voltage has a resolution of 1 mV per bit. Beacon temperature is in degrees Celsius, expressed in 8.8 fixed-point representation. ADV_CNT counts the number of advertising frames sent since power-on or reboot. SEC_CNT is a 0.1-second-resolution counter. It is not necessary to provide all of the data; you can choose to obscure certain fields by simply not updating their values.
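A sketch of how these fields could be packed, assuming the big-endian layout of the TLM specification (the example values are arbitrary):

```python
import struct

def tlm_frame(batt_mv, temp_c, adv_cnt, sec_cnt):
    """Pack an Eddystone TLM frame body: frame type 0x20, TLM version
    0x00, battery in mV (1 mV per bit), temperature in signed 8.8
    fixed point, then the two big-endian 32-bit counters."""
    temp_fixed = int(round(temp_c * 256)) & 0xFFFF   # 8.8 fixed point
    return struct.pack(">BBHHII", 0x20, 0x00, batt_mv, temp_fixed,
                       adv_cnt, sec_cnt)

frame = tlm_frame(batt_mv=3100, temp_c=25.5, adv_cnt=1000, sec_cnt=6000)
print(frame.hex())
```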

One important note about this frame is that it cannot be a standalone frame in the EddystoneTM protocol. It should always be used in conjunction with the UID or URL frame.

Resources

As usual, for firmware developers, the Cypress EZ-BLE Pioneer Kit provides easy access to learning this protocol, and Cypress has provided some examples for its users as well (source).

Build the Future


Understanding Ethernet Cables

It binds us. It connects us. It gives us knowledge. It’s omnipresent and omniscient. It’s… Wi-Fi.

Well.  Sometimes we forget that Wi-Fi signals and connectivity still require a much underappreciated and most needed component –


The Ethernet cable.

Your mobile connectivity has to come from some sort of wireless hub, which is connected to either an Ethernet or fibre-optic cable these days. For this post, we’re looking at the standard Ethernet cable used for VoIP phones, routers, switches, hubs, servers, computers, network printers and more. To the casual user, these cables come in a confusing array of choices.

However, there is, in fact, a difference between all those network cables. They look alike, and any of them will plug into an Ethernet port, but they do have some differences on the inside.

So let’s say you need to get some new cable to set up a small home network, and you walk into a gadget store and say, “Hi, I’d like to get some computer cable”.

That’s not going to be helpful. The differences between each type of cable are due to various network standards. Here’s what you need to know about how they’ll affect the speed of your home or work network:

Cat5 – Oldest type and slowest – Category 5 cabling, also known as Cat5, was made to support theoretical speeds of 10-100Mbps (10BASE-T, 100BASE-TX). Since Cat5 is an older type of cabling, you probably won’t see it much in stores, but you may have gotten some with an older router or another networking device. Most Cat5 cables are unshielded, relying on the balanced line twisted pair design and differential signaling for noise rejection. Cat5 has been superseded by the CAT5e (enhanced) specification.

Cat5e – Faster with less interference – Category 5 enhanced cabling is an improvement on Cat5 cabling. It was made to support up to 1000 Mbps gigabit speeds, so in theory, it’s faster than Cat5. It also cuts down on crosstalk, which is the interference you can sometimes get between wires inside the cable. Both of these improvements mean you’re more likely to get a faster, more reliable speed out of Cat5e cabling compared with Cat5. This is the most common cable that is in use today.

More details of a Cat5e cable’s specifications can be found here.

Cat6 – Consistent gigabit speeds – Category 6 cabling is the next step up from Cat5e, and includes a few more improvements. It has even stricter specifications when it comes to interference, and is even capable of 10-Gigabit speeds.

It’s used in large networks, small data centers and in a business environment. You probably won’t use these speeds at home, and the extra interference improvements won’t make a huge difference in regular usage, but if you’re purchasing a new cable, you might want to consider Cat6 for future upgradability.

One thing to note: with the extra shielding, Cat6 cables are slightly thicker and slightly less pliant, meaning they won’t bend around corners or coil as easily as Cat5 cables.

Cat 6a (Augmented) – Server-type cables – These cables are the fastest and most expensive cables available for the highest consistent speeds in server farms, network servers and distributed / parallel computing applications.

All cables are backward compatible, meaning the higher categories can work with the lower categories of Ethernet. An interesting thing to note is that there are other categories of cable, Cat7-8, but they are not recognized by the TIA/EIA and few manufacturers make them.

So Which Should You Use?

It’s important to note that your network speed is different from your internet speed. Chances are, upgrading your cables isn’t going to make a difference in how fast you load that YouTube video – your ISP speeds are likely to be much slower than your network. But if you are transferring files between computers (e.g. backing up to a NAS) or streaming videos, then using gigabit-compatible hardware will speed up access times. Your router and computer ports should be gigabit-compatible as well, otherwise the bottleneck is just shifted elsewhere. Also, if you’re running cable throughout your house, you may notice a decrease in speeds if you are using cables longer than 100 meters.

What type of cable are you using?

The printed text on the cable will usually give you some clues. In this example, the similar color arrangement of the cable tells us that the cable is a straight cable.

Straight and crossover cables are wired differently from each other. One way to tell what you have is to look at the order of the coloured wires inside the plastic RJ-45 housing. (The RJ-45 connector is the standard connector type used for Ethernet connections, as opposed to the smaller RJ-11 connector used on telephone cords.) Crossover cables are less common and are used in computer-to-computer applications. The more common straight cable is used to connect a network device (say, a router) to your computer or wireless access point.

What you can do is put both ends of the cable side by side with the connector facing the same way and see if the order of the wires is the same on both ends. If so, then you have a straight cable. If not, then it’s most likely a crossover cable or was wired wrong.
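That side-by-side check can be sketched as a simple comparison; the example colour orders below are the standard T568B and T568A arrangements:

```python
# Compare the wire-colour order on both RJ-45 ends, as described
# above. T568B and T568A are the two standard pin orderings.
T568B = ["white-orange", "orange", "white-green", "blue",
         "white-blue", "green", "white-brown", "brown"]
T568A = ["white-green", "green", "white-orange", "blue",
         "white-blue", "orange", "white-brown", "brown"]

def cable_type(end_a, end_b):
    """Same order on both ends means a straight cable."""
    return "straight" if end_a == end_b else "crossover (or miswired)"

print(cable_type(T568B, T568B))   # straight
print(cable_type(T568B, T568A))   # crossover (or miswired)
```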

Our cable reads “CAT5E” “TIA/EIA568B” “4STP” “24AWG”. Here is what that means:

  • Cat5e Category 5 Enhanced cable
  • TIA/EIA568B means this cable is compliant with the standard set by the Telecommunications Industry Association (TIA) of the Electronic Industries Alliance (EIA) for commercial building cabling: pin/pair assignments for eight-conductor, 100-ohm balanced twisted-pair cabling. These assignments are named T568B; the latest version of the standard is T568C as of 2014 – more on this standard here.
  • 4STP means four twisted pairs, where S = braided shielding (outer layer only) and TP = twisted pair. This is a four-pair shielded twisted pair (STP) cable; more can be read on Ethernet cable construction here.
  • 24AWG denotes how thick the copper wires of the cable are. American wire gauge (AWG) is a standardized gauge system for indicating the diameter of conducting electrical wire and is commonly used among electrical engineers. The rating gives information about the resistance of the wire and its allowable current (ampacity) given its insulation. In this case, 24 corresponds to a conductor diameter of 0.51054 mm (0.205 mm² cross-section) and a copper resistance of about 84.22 mΩ/m. (The cable’s characteristic impedance, a separate specification, is typically 100 ± 15 Ω.)
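For the curious, the 24AWG figures above can be reproduced from the standard AWG diameter formula and the resistivity of copper:

```python
import math

RHO_COPPER = 1.724e-8   # resistivity of copper in ohm-metres (at ~20 degC)

def awg_diameter_mm(gauge):
    """Standard AWG formula: d = 0.127 mm * 92^((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

def resistance_per_metre(gauge):
    """DC resistance of one solid copper conductor, in ohms per metre."""
    d_m = awg_diameter_mm(gauge) / 1000
    area = math.pi * d_m ** 2 / 4          # cross-sectional area in m^2
    return RHO_COPPER / area

print(f"24 AWG: {awg_diameter_mm(24):.3f} mm diameter, "
      f"{resistance_per_metre(24) * 1000:.1f} mOhm/m")
```

Running this gives roughly 0.511 mm and 84 mΩ/m, matching the printed specification.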

Cable health

Ok, now that you know a whole lot more about Ethernet cables, you fish one out from the rat’s nest where you’ve stored all the random cables in your house and plug it in.

It doesn’t work.

Great, what now? How do you tell if it’s a bad cable?

Wouldn’t it be great to know if the wires inside the cable had broken? Especially if it’s an expensive Cat6 type cable or if the cable is especially long.

We got an RJ-45 network cable tester from Amazon; however, this little gadget isn’t intuitive at all. According to the poorly written manual, the lights will turn on sequentially from 1 to 8 if the tested cable is fully functional. It works sporadically and basically gives an “OK/NOT OK” binary output by means of a single LED.

Next, we got ourselves RJ-45 punch-down blocks from Amazon, and they were a pain. The clips required pliers to remove, and more wires needed to be punched into the teeth before any sort of testing could be done. A real inconvenience.

There must be a better way – so we created a new solution: an easy-to-use RJ-45 Ethernet cable tester, a quick, low-cost, painless way to test your Ethernet cables.

This is as simple as it gets – two RJ-45 receptacles. Plug in your Ethernet cable, use a continuity checker or a multimeter, and you will know whether your cable works or is broken.

You can now test the resistance of each conductor in your cable using your multimeter and determine whether your cable meets the necessary tolerances. Typical Cat5e cables have a conductor resistance of about 10 Ω per 100 m, which works out to a loop (out-and-back) resistance of about 0.2 Ω per metre; with tolerances included, you should measure an end-to-end loop resistance of under 20 Ω per 100 m of cable length.

Or you could simply do a continuity test to ensure your cables aren’t broken and no wires are missing. We tested all our Cat5/Cat5e cables – flat, shielded, long and short – in seconds! This nifty gadget was born out of frustration in finding a simple, low-cost means of testing lots of Ethernet cables quickly and easily, and our engineers now use it frequently. It’s a must-have for any engineer or technician working with lots of Ethernet cables.

If you liked this post, subscribe with your email below – we’d love to hear from you, and we’ll be happy to receive suggestions and comments for improvement in the comments section below.

Build the future.


Bluetooth Low Energy – iBeacon basics

iBeacon is a protocol developed by Apple Inc. in 2013. In short, iBeacon technology allows mobile applications to understand their position on a micro-local scale and deliver location-specific contextual content to users based on their proximity. The underlying communication technology is Bluetooth Low Energy (BLE). The protocol takes advantage of BLE to bring about very low-powered broadcasting devices that can alert users to their presence. By adding certain parameters to the advertising data packet, the broadcasting device can be used to transmit location-specific information. The most common use case for such iBeacon devices is in brick-and-mortar shops, where business owners want to transmit location-specific promotions or to track shoppers who have the shop’s app installed on their smartphones. Of course, privacy is always a running concern with user tracking, but with the latest mobile OSes, users can prevent their privacy from being breached. The caveat is that they have to be tech-savvy enough to know how to do that.

Several development projects that we’ve been involved in are requests for iBeacon-like attributes – implementing a similar protocol but with varying levels of modification. The good news is that you can implement iBeacon on your own. There will be another article released soon, where we talk about Eddystone™ – Google’s latest beacon protocol. Stay tuned for that.

For those really curious about the type of projects we have done using iBeacon and other beacon protocols, I will just say that there are various combinations, but mostly it boils down to monitoring users’ behaviour based on location. We focused on product development, so our clients could remain focused on their customers’ buy-in and marketing.

iBeacon advertising protocol

iBeacon is a one-way transmitting protocol. You have to set the device to be non-connectable and always stay in advertising mode. The reason for non-connectable should be a no-brainer: while a BLE device is connected, it does not advertise. To save power, you will want to broadcast at a longer interval. You can try a 100 ms broadcasting interval and slowly fine-tune it from there.

[Illustration: iBeacon advertising packet byte layout]

The above illustration describes the iBeacon protocol in the simplest way possible. All BLE SDKs should have provision to modify the advertising data, so make sure to arrange your advertising data as per the protocol or the iBeacon will not work. To officially work with iBeacon, you will need to license the technology from Apple. You can read Apple’s beginner document for more information (source).
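As an illustration (this is a sketch, not Apple’s reference code), the manufacturer-specific section of the advertising data can be assembled like this, here using the first AirLocate UUID listed further below:

```python
import struct
import uuid

def ibeacon_adv_payload(proximity_uuid, major, minor, measured_power):
    """Build the manufacturer-specific data section of an iBeacon
    advertisement: Apple's company ID (0x004C, little-endian), the
    iBeacon type byte (0x02), the remaining length (0x15 = 21 bytes),
    then the 16-byte proximity UUID, major, minor and measured power."""
    return (struct.pack("<H", 0x004C)
            + bytes([0x02, 0x15])
            + proximity_uuid.bytes
            + struct.pack(">HHb", major, minor, measured_power))

payload = ibeacon_adv_payload(
    uuid.UUID("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0"),
    major=1, minor=7, measured_power=-59)
print(payload.hex())
```

Your BLE SDK would place these 25 bytes into the advertising packet’s manufacturer-specific-data AD structure.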

Moving on to the more technical aspect.

The 4 most important components of this protocol are:

1. Proximity UUID
2. Major
3. Minor
4. Measured power

Proximity UUID

This is the unique identifier that separates your device from the hundreds or thousands of iBeacon devices out there. Your app will have to white-list your UUIDs in order to prevent reading from the wrong device. By whitelisting, I mean your app should only read from devices with your authorized UUID. There are multiple avenues to generate UUID but one of the most common sources can be found here.

If you are just testing out iBeacon using the AirLocate sample code located on the Apple Developer site (https://developer.apple.com/ibeacon/), you will need to use their predefined UUIDs:

1. E2C56DB5-DFFB-48D2-B060-D0F5A71096E0

2. 5A4BCFCE-174E-4BAC-A814-092E77F6B7E5

3. 74278BDA-B644-4520-8F0C-720EAF059935

4. 112ebb9d-b8c9-4abd-9eb3-43578bf86a41

5. 22a17b43-552a-4482-865f-597d4c10bacc

6. 33d8e127-4e58-485b-bee7-266526d8ecb2

7. 44f506a4-b778-4c4e-8522-157aac0efabd

8. 552452fe-f374-47c4-bfad-9ea4165e1bd9

Major/Minor

To define the location of the device, you declare the major and minor values. An example usage would be a multi-storey department store: you use major to define the level and minor to define a particular section within that level. Since each value is a 2-byte parameter, you can define up to 65,536 unique major values and 65,536 unique minor values. If you do the sum, that’s a lot of unique locations.

Measured power

Another aspect of iBeacon is that it can tell how far the user is from the iBeacon device. To achieve that, however, you need to calibrate the device. The way to calibrate is fairly straightforward: you need a mobile app that can measure RSSI, and you use it to measure the RSSI at a 1 m distance from your device. I would also recommend doing the calibration in multiple directions and computing the average as the measured power. Bear in mind that the measurement is in dBm.
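The averaging step is trivial; a sketch with hypothetical readings taken facing four different directions at 1 m:

```python
def measured_power(rssi_samples_dbm):
    """Average RSSI readings (in dBm) taken at 1 m in several
    directions; the rounded result becomes the measured-power field."""
    return round(sum(rssi_samples_dbm) / len(rssi_samples_dbm))

# Hypothetical 1 m readings, one per direction:
print(measured_power([-58, -61, -60, -57]))   # -> -59
```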

The best time to calibrate is at actual deployment. Laboratory or factory calibration will usually net the best results, but it doesn’t reflect your actual environment, as there are many factors to consider when it comes to RF, e.g. Fresnel zones, dead zones, etc.

Surprise

A little surprise to some, but you can actually indicate the device’s battery life in the advertising protocol. It is a little more advanced, so I wouldn’t recommend it to beginners. The reason you can do this is that Apple’s protocol does not use up all of the available BLE advertising bytes: you can squeeze in 1 more byte right after the measured power. However, since it only carries the values 0-15, you will need custom firmware code in your device to digitize the battery level into 16 levels.
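A sketch of that digitization step; the empty/full voltage thresholds here are hypothetical and depend on your cell’s discharge curve:

```python
def battery_level_16(batt_mv, empty_mv=2000, full_mv=3000):
    """Digitize a battery voltage into the 16 levels (0-15) that fit
    in the spare advertising byte. The empty/full thresholds are
    illustrative assumptions, not part of the iBeacon protocol."""
    span = full_mv - empty_mv
    level = (batt_mv - empty_mv) * 15 // span
    return max(0, min(15, level))   # clamp to the 0-15 range

print(battery_level_16(2500))   # -> 7
```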

Go forth and code away!


BLE introduction: Notify or Indicate

What are the differences between Notify and Indicate?

The short answer is that they both basically do the same thing, except Indicate requires acknowledgement, while Notify does not. For the longer answer, please read the rest of my post.

By default, you cannot push data to your remote client whenever your Bluetooth Low Energy (BLE) device has new data to publish. If you read my previous post, you will know that you need to enable the proper permission and include a “Client Characteristic Configuration Descriptor” (CCCD) for that to happen. And don’t forget: your remote client must subscribe to the attribute via the CCCD to receive the pushed data.

Typically, people push data when they want their remote client to receive updates asynchronously whenever the BLE device has new data. While you could poll periodically, that method wouldn’t be very energy efficient, and you won’t get the quickest updates (this depends on your polling rate). Also, your BLE device might receive updates in an aperiodic fashion, so periodic polling is an utter waste of energy. Furthermore, polling requires two-way communication while Notify is one-way, so you also save on radio airtime, leading to a further energy reduction. However, because Notify is unacknowledged, it is unreliable: you will not know whether your remote client has received the data.

To rectify that, let me introduce the Indicate feature. It is almost the same as Notify, except it supports acknowledgement: your remote client will have to send an acknowledgement when it has received the data. However, such reliability comes at the expense of speed, since your BLE device will wait for the acknowledgement until it times out.

Conclusion

[Table: pros and cons of Notify vs Indicate]

For most use cases, I would recommend starting with Notify until you find in your test environment that data is dropping out; then you move to Indicate. In any case, Indicate may take up additional airtime, but it shouldn’t drastically affect your use case. If the communication speed of Indicate is borderline for your requirements, then perhaps you should look at alternative wireless technologies, because BLE is not meant for high-speed communication.


GATT it man

Hey folks, I am back with another article on Bluetooth Low Energy (BLE). In my previous article, I addressed some common misconceptions between classic Bluetooth and BLE. This time, I will be talking about one of the most commonly used profiles for a BLE device – the GATT, also known as the Generic Attribute Profile.
Before I begin, let’s start with a good ol’ history lesson. You may ask why. Simply because I love history, and I promise it’s a short one.
BLE was not always known as Bluetooth Low Energy. It was first conceived by Nokia in 2006 as Wibree. Then, it was touted to be a competitor to Bluetooth and there were articles on how Wibree might replace Bluetooth in some use cases. However, as we all know by now, that didn’t happen because Bluetooth SIG absorbed Wibree in 2010 into its current Bluetooth v4.0 specifications and BLE was born. As always, we have to thank Nokia for laying the foundation of many good wireless technologies that we have come to enjoy today. Nokia 3210, I will always love you for being my first… mobile phone.
With the history lesson behind us, let’s move on with the introduction of BLE profiles and protocols.
For BLE, you will use one of the following profiles or protocols:

  1. GATT-based profiles
  2. BR/EDR profiles
  3. BR/EDR protocols

I will focus on GATT-based profiles as they are the most commonly used profiles in data-transfer applications.

GATT Profiles

When one looks deeper into the GATT-based profiles, one quickly discovers that Bluetooth SIG has defined many profiles for generic IoT usage. Of course, you can also design a custom profile if none of them suits your use case. However, there are reasons Bluetooth SIG provides predefined profiles.
First, a predefined profile is kind of like a “plug and play” element in your design. Most BLE-chip software development kits (SDKs) support some of these profiles, and all you have to do is enable them. This saves developers a lot of time.
Second, the predefined profile uses the 16-bit universally unique identifier (UUID) because it has been approved by Bluetooth SIG. A custom profile needs to use a 128-bit UUID to prevent collision with an existing UUID (if any) and for future-proofing the design.
Having said that, a predefined profile will not be flexible when you need to transmit information that is not in the predefined list. When that happens, you will need a custom profile. At this juncture, I know that some of you may be tempted to “retrofit” a predefined profile for your use case, but I do not recommend that. It is better to define a custom one from scratch.

gattitman-pros-and-cons-table
Pros-and-cons summary table

GATT Structure

As much as I wish to go into great detail on the deep, deep abyss of BLE, I will give only an overview to get you chaps started. (Ok, maybe it’s not that deep an abyss – maybe it’s a shiny piece of heaven to some of you.) For those who wish to learn more about BLE, do read this.
The best way to get started is to have a graphical representation of the GATT profile.

[Diagram: GATT profile structure]
Figure Credit.

When it comes to designing a profile, there are three attributes that must be taken care of – namely service, characteristic and descriptor. Each attribute is demarcated by a handle number, which is usually enumerated automatically by your Bluetooth chip’s SDK. If your SDK does not take care of that, I will just say “have fun enumerating”. So, to prevent making a mess of your code, it’s better to design and iterate before you jump into coding, you code monkeys.
One good thing to note when designing a profile: it’s not necessary for a characteristic to hold a descriptor, but it is mandatory for a service to hold at least one characteristic.
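To make the hierarchy and handle enumeration concrete, here is a plain data-model sketch – not any particular SDK’s API. Real stacks also insert declaration attributes, so actual handle numbering will differ:

```python
# Plain data-model sketch of the GATT hierarchy: services hold
# characteristics, characteristics optionally hold descriptors, and
# every attribute gets an enumerated handle.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Descriptor:
    uuid: str
    handle: int = 0

@dataclass
class Characteristic:
    uuid: str
    permissions: List[str]
    descriptors: List[Descriptor] = field(default_factory=list)  # optional
    handle: int = 0

@dataclass
class Service:
    uuid: str
    characteristics: List[Characteristic]   # at least one is mandatory
    handle: int = 0

def enumerate_handles(services, start=1):
    """Assign sequential handles the way an SDK typically would."""
    h = start
    for svc in services:
        svc.handle = h; h += 1
        for ch in svc.characteristics:
            ch.handle = h; h += 1
            for d in ch.descriptors:
                d.handle = h; h += 1
    return services

profile = enumerate_handles([
    Service("0x180D", [                         # Heart Rate service
        Characteristic("0x2A37", ["notify"],
                       [Descriptor("0x2902")])  # CCCD
    ])
])
print(profile[0].handle,
      profile[0].characteristics[0].handle,
      profile[0].characteristics[0].descriptors[0].handle)
```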

Service

There are two types of services – primary and secondary. Usually, developers will use only the primary service because the secondary service can exist only under the primary service, which makes it a little tricky to use. Within a profile, you may find multiple primary services. For example, a standard heart-rate tracker will have the following services:

  • Device information service
  • Battery information service
  • Heart-rate service

Note that all of these are predefined services. You are encouraged to create your own service if none of the standard services fits your usage; modifying a standard service may yield unexpected results, so it’s recommended that you write your own service and characteristics instead.
So if you look at the graphic above, you can think of a service as a container for all your characteristics. During device advertising, you would usually include the UUIDs of the key primary services so your remote client can discover the right device.

Characteristic

A characteristic is a data container for the information that your device wishes to transmit to your remote client. For example, a heart-rate service will have a Heart-Rate Measurement characteristic to hold the heart rate that your device has detected.
When declaring a characteristic, you will also determine the permission level for the characteristic:

  • Read
  • Write
  • Write with response
  • Signed write
  • Notify
  • Indicate
  • Writable Auxiliaries
  • Broadcast

The most common permissions are read and write. On a side note, if you wish to push your data to your remote client, you will need to enable the notify permission. However, enabling the notify permission is not enough: you will also need to include the notify-switch descriptor, also known as the Client Characteristic Configuration Descriptor (CCCD).

Descriptor

As mentioned previously, it is not entirely necessary for a characteristic to be accompanied by a descriptor. A question that a reader might ask is, under what circumstance would you include a descriptor? That would be when you wish to describe a characteristic (like its name or function) or to include a notify switch. For the former, you would include a “Characteristic User Description” and for the latter, you would include a “Client Characteristic Configuration Descriptor”.

In Summary

In a nutshell, you use a predefined profile if the standard profile is good enough for you, and a custom one if you know what you are doing.
For a firmware designer, you have to take note that the BLE GATT profile is by no means persistent. You will need some form of non-volatile memory to hold your default values or previously captured data, and you must instantiate your GATT profile on each device restart or power-up.
Now go forth and code away!
Build the future.