jdoe95 / mrtos

Small footprint Real Time Operating System

License: MIT License

C 100.00%
rtos priority scheduler real-time multithreading preemptive microcontroller arm-cortex dynamic-memory-allocation queue

mrtos's Introduction

Microcontroller Real Time Operating System

This operating system is designed for SINGLE CORE microcontrollers and uses a fixed-priority preemptive scheduling algorithm. Briefly:

  • The scheduler tries to ensure that the highest-priority ready thread is always the one running.
  • Outside a critical section, a higher-priority thread always preempts a lower-priority one.
  • If several ready threads share the highest priority, CPU time is shared among them.
  • If a higher-priority thread is waiting for a resource that is currently unavailable, the operating system puts it to sleep and runs other threads until the resource becomes available or the specified time-out is reached. In either case the thread is readied and a reschedule is triggered immediately.

This RTOS has not been fully evaluated for performance or reliability. It is licensed under the MIT license, which permits commercial and open/private use and does not require source code disclosure (subject to the conditions stated in LICENSE). Please read LICENSE before planning to use it.

If you believe you have found a bug, please consider reporting it in ISSUES so that a fix can be worked out and the software made more robust, although this is not required. If you decide to report one, include as much of the following information as possible, after making sure to protect your private code and information:

  1. How to replicate the bug/how it is discovered
  2. Version affected (tag, git-describe, or commit ID)
  3. Version or versions known to be good (tag, git-describe, or commit ID)

Features

Multithreading

  1. Dynamic thread creation and deletion
  2. Static thread creation and deletion using existing buffer (as RTOS module)
  3. Thread suspend/resume
  4. Dynamic priority
  5. Sleeping
  6. Yielding
  7. Critical section
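
A minimal sketch of the dynamic-thread calls listed above, using the os_thread_create(priority, stack_size, entry) and os_thread_delay(ticks) signatures that appear in the issue reports further down this page; the header name rtos.h is an assumption.

/* Sketch only: signatures taken from the issue reports below,
 * header name assumed. */
#include "rtos.h"

static void blink_thread( void )
{
	for( ;; )
	{
		/* do periodic work here */
		os_thread_delay( 100 ); /* sleep for 100 heartbeat ticks */
	}
}

void app_init( void )
{
	/* priority 2, 1024-byte stack, entry function */
	os_thread_create( 2, 1024, blink_thread );
}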

Dynamic memory

  1. Dynamic memory allocation/deallocation using Next Fit
  2. Block/Pool/Thread memory statistics
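
A hedged sketch of the allocation calls, using only the function names that appear in the issue reports below (os_memory_allocate, os_memory_free); their exact prototypes are assumptions here.

/* Sketch only: function names from the issue reports below,
 * prototypes assumed. */
void *p_buf = os_memory_allocate( 64 ); /* request 64 bytes from the pool (Next Fit) */

if( p_buf != NULL )
{
	/* ... use the buffer ... */
	os_memory_free( p_buf ); /* return the block to the pool */
}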

Inter-process communication

Queue

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer (as RTOS module)
  3. Buffer reset (clear all data)
  4. Peek/Nonblocking peek (reading data without affecting the queue)
  5. Receive/Nonblocking receive (reading data)
  6. Send/Nonblocking send
  7. Send ahead/Nonblocking send ahead (sending high priority messages)
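
An illustrative sketch of the send/receive calls listed above. The function names come from the issue reports below; the (handle, buffer, size, timeout) argument order is an assumption, mirroring the os_queue_peek(h_q, NULL, 512, 50) proposal later on this page.

/* Sketch only: function names from the issue reports below,
 * argument order assumed.  h_q is a queue handle obtained at creation. */
uint8_t msg[4] = { 1, 2, 3, 4 };

os_queue_send( h_q, msg, sizeof(msg), 50 );    /* block up to 50 ticks for buffer space */
os_queue_send_nb( h_q, msg, sizeof(msg) );     /* fail immediately if the queue is full */
os_queue_receive( h_q, msg, sizeof(msg), 50 ); /* block up to 50 ticks for data */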

Mutex (recursive)

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer (as RTOS module)
  3. Peek mutex without affecting the lock status
  4. Lock/Nonblocking lock
  5. Unlock
  6. Lock status
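
This page does not show the mutex call names, so the sketch below uses hypothetical names (os_mutex_lock, os_mutex_unlock) chosen by analogy with the semaphore and queue calls; treat it purely as an illustration of the lock/unlock features listed above.

/* Hypothetical names, for illustration only.
 * h_mutex is a mutex handle obtained at creation. */
if( os_mutex_lock( h_mutex, 1000 ) ) /* block up to 1000 ticks for the lock */
{
	/* ... access the shared resource; the lock is recursive,
	 * so the same thread may lock it again ... */
	os_mutex_unlock( h_mutex );
}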

Semaphore (multi-valued)

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer (as RTOS module)
  3. Counter reset
  4. Post operation
  5. Wait/Nonblocking wait operations
  6. Peek/Nonblocking peek without affecting semaphore status
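
A short sketch: os_semaphore_wait(handle, timeout) and its boolean result appear in the issue reports below; the post call name os_semaphore_post is an assumption based on the feature list above.

/* Sketch only: os_semaphore_wait() appears in the issue reports below;
 * the post call name is assumed.  h_sem is a semaphore handle. */
if( os_semaphore_wait( h_sem, 1000 ) ) /* wait up to 1000 ticks for the semaphore */
{
	/* ... consume the resource ... */
}

os_semaphore_post( h_sem ); /* signal availability from the producing thread */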

How to port

Using supported platforms

If you are using one of the following platforms, clone the corresponding repository into a folder called mrtos-portable, under the same directory as mrtos, using the command

git clone <repository> mrtos-portable

Don't forget to add mrtos-portable to the compiler include search path (-Imrtos-portable) so that mRTOS can find the portable header file.

Supported platforms:

Porting to unsupported platforms

Clone the mrtos source code under your project using

git@github.com:jdoe95/mrtos.git

Create another folder under the project named mrtos-portable, and then create the following files under that folder

rtos_portable.h
rtos_portable.c

where rtos_portable.h should contain the configuration macros and function prototypes, and rtos_portable.c should contain the function bodies. It is important that you add the mrtos-portable folder to the compiler include path (-Imrtos-portable) so that the RTOS source code can find the portable header.
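
The resulting layout under your project then looks roughly like this:

your_project/
    mrtos/                (RTOS source, cloned above)
    mrtos-portable/
        rtos_portable.h   (configuration macros and prototypes)
        rtos_portable.c   (function bodies)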

Define the following macros in rtos_portable.h

  • OSPORT_BYTE_T the byte type of the platform, usually uint8_t in stdint.h

  • OSPORT_UINT_T an unsigned integer type, used for array indexing, timestamp, and priority. Usually uint16_t or uint32_t in stdint.h

  • OSPORT_UINTPTR_T an unsigned integer type, guaranteed to be able to hold an address, used for os_handle_t, usually uintptr_t in stdint.h

  • OSPORT_BOOL_T a boolean type, usually bool in stdbool.h

  • OSPORT_IDLE_STACK_SIZE the stack size of the idle thread, usually the size of exactly one full stack frame.

  • OSPORT_NUM_PRIOS the number of priorities. Use as few as possible; usually 8. With 8 priorities, priority 0 is the highest and priority 6 the lowest usable one; priority 7 is reserved for the idle thread. Do not create anything at that priority.

  • OSPORT_MEM_ALIGN the largest memory pool alignment requirement. On some platforms certain data types must be aligned to certain address boundaries, and unaligned access can generate CPU faults or cause performance issues. For example, a platform may require 8-bit data to be aligned to a 1-byte boundary and 16-bit and 32-bit data to be aligned to a 4-byte boundary; in that case the value would be 4, the largest alignment requirement. The memory pool is also used to allocate process stacks.

  • OSPORT_MEM_SMALLEST The smallest amount of memory (in bytes) allocated to a thread at a time. To minimize fragmentation, the OS will never allocate less than this value to a thread.

  • OSPORT_ENABLE_DEBUG Use 1 to enable the assertion macros. If you believe there's a bug in the operating system, turn this on to allow the OS to capture the bug before it causes a chain of errors.

  • OSPORT_IDLE_FUNC The name of the idle function, which will be created as the idle thread. On most platforms this is simply a function that runs an empty infinite loop. Sometimes it is desirable to put the CPU to sleep in the idle function, using platform-dependent methods.

  • OSPORT_START() The function that clears the main stack context, sets up the CPU mode, and loads the first thread.

  • OSPORT_INIT_STACK() The function that initializes the process stack so that it contains the initial stack frame (called a fake context) to be loaded onto the CPU.

  • OSPORT_BREAKPOINT() The breakpoint function that halts the debugger. Only used when OSPORT_ENABLE_DEBUG is 1. If not defined, it will be replaced by an infinite loop.

  • OSPORT_DISABLE_INT() The function that disables interrupts.

  • OSPORT_ENABLE_INT() The function that enables interrupts.

  • OSPORT_CONTEXTSW_REQ() The function that generates a context switch request. Usually the context switcher is implemented as the lowest priority interrupt.
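
Purely as an illustration, a rtos_portable.h skeleton for a Cortex-M-style target might look like the following. Every concrete value and every port_* function name below is an assumption and must be adapted to the real platform; OSPORT_INIT_STACK() is left out because its parameter list depends on the kernel headers.

/* rtos_portable.h -- illustrative skeleton only; all values and
 * port_* names below are assumptions for a Cortex-M-style target. */
#ifndef RTOS_PORTABLE_H
#define RTOS_PORTABLE_H

#include <stdint.h>
#include <stdbool.h>

#define OSPORT_BYTE_T           uint8_t
#define OSPORT_UINT_T           uint32_t
#define OSPORT_UINTPTR_T        uintptr_t
#define OSPORT_BOOL_T           bool

#define OSPORT_IDLE_STACK_SIZE  (128)  /* roughly one full stack frame */
#define OSPORT_NUM_PRIOS        (8)    /* priority 7 reserved for the idle thread */
#define OSPORT_MEM_ALIGN        (4)    /* largest alignment requirement */
#define OSPORT_MEM_SMALLEST     (16)   /* smallest allocation per request */
#define OSPORT_ENABLE_DEBUG     (1)

#define OSPORT_IDLE_FUNC        port_idle
#define OSPORT_START()          port_start()
#define OSPORT_BREAKPOINT()     port_breakpoint()
#define OSPORT_DISABLE_INT()    port_disable_int()
#define OSPORT_ENABLE_INT()     port_enable_int()
#define OSPORT_CONTEXTSW_REQ()  port_contextsw_req()

/* OSPORT_INIT_STACK() is omitted here: its parameter list depends on
 * the kernel headers, so define it to match them. */

void port_idle( void );
void port_start( void );
void port_breakpoint( void );
void port_disable_int( void );
void port_enable_int( void );
void port_contextsw_req( void );

#endif /* RTOS_PORTABLE_H */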

In rtos_portable.c you should have

  1. Functions declared in rtos_portable.h
  2. A context switcher implemented as the lowest priority interrupt.
  3. A timer interrupt that calls os_handle_heartbeat() periodically after the operating system starts.
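
On an ARM Cortex-M target (one of the repository's topic tags), the context switcher is commonly placed in the PendSV exception and the heartbeat in SysTick. The outline below is an assumption-laden sketch, not code from the repository; only os_handle_heartbeat() is a name taken from this page, and its prototype is assumed.

/* rtos_portable.c -- illustrative outline only (Cortex-M-style). */
#include "rtos_portable.h"

void os_handle_heartbeat( void ); /* kernel heartbeat entry point; prototype assumed */

/* 3. timer interrupt: call os_handle_heartbeat() periodically after
 *    the operating system has started */
void SysTick_Handler( void )
{
	os_handle_heartbeat();
}

/* 2. context switcher, configured as the lowest-priority interrupt;
 *    the actual register save/restore is platform-specific assembly
 *    and is omitted here */
void PendSV_Handler( void )
{
	/* save the current thread context, load the next thread context */
}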

mrtos's Issues

Changing scheduler queue implementation

https://github.com/jdoe95/rtos/blob/284e2765ec94535a798d5837dc8fa1bdba99def5/rtos/include/scheduler.h#L84

Problem description

It was originally proposed that, when a thread is blocked on a resource, sch_citem_t.p_schinfo would point to a struct containing a queue item (either FIFO or priority) that allows the task control block to be registered in the resource's waiting queue. However, doing this does not allow os_thread_delete or similar functions to discover the item and safely remove it from the waiting queue should the blocked thread be deleted. Thus, for the sake of os_thread_delete, the resource queue item should be visible in the thread control block. On the other hand, it was discovered that sch_citem_t.sch_item can potentially be reused as the resource queue item, since it is essentially unused by the scheduler while the thread is blocked (it is inserted into sch_cblk_t.q_block and left alone until the thread is readied). The following two solutions exploit this instead of introducing a separate resource item.

Solution 1

Which type of item is needed (FIFO or priority) depends on the resource. This solution offers both types of queue item in the space of one.

/*
 * Scheduler control item
 */
struct sch_citem_s
{
	union /* scheduling item */
	{
		struct sch_fifoq_item_s sch_item_fifo; /* FIFO scheduling item */
		struct sch_prioq_item_s sch_item_prio; /* priority scheduling item */
	} sch_item;

	struct sch_prioq_item_s delay_item; /* delay item */
	volatile os_counter_t prio; /* item priority */
	volatile os_sched_state_t state; /* item state */
	void *volatile p_container; /* container */
	void *volatile p_schinfo; /* scheduling info block */
};

Depending on its needs, the resource can then push either scheduling item onto its queue. However, this increases the chance of accidentally using the wrong type of queue function, and it is hard to make the initialization of sch_item less misleading. Moreover, if the members of sch_item_fifo and sch_item_prio are not arranged the same way, key data can be overwritten.

Solution 2

Instead of introducing yet another queue item, the focus is to make sch_citem_t.sch_item support both priority and FIFO queue operations. To do this, FIFO queue items and priority queue items are no longer differentiated; instead, they are merged into a single item, whose memory usage is the same as in Solution 1.

/*
 * Scheduler queue item
 * Order of members makes a difference.
 */
struct sch_qitem_s
{
	struct sch_qitem_s *volatile p_prev; /* previous item */
	struct sch_qitem_s *volatile p_next; /* next item */
	struct sch_citem_s *volatile p_container; /* container */
	void *volatile p_q; /* mother queue */
	volatile os_counter_t tag; /* tag value for ordering */
};

The two queue headers are then defined separately so that the compiler can check if the right queue function is used on the queue type.

/*
 * Scheduler FIFO queue header
 */
struct sch_fifoq_s
{
	struct sch_qitem_s *volatile p_first; /* first item */
};

/*
 * Scheduler priority queue header
 */
struct sch_prioq_s
{
	struct sch_qitem_s *volatile p_first; /* first item */
};

void sch_fifoq_??( struct sch_fifoq_s *p_q, ...);
void sch_prioq_??( struct sch_prioq_s *p_q, ... );

The second solution works the same as the first but is easier to implement, debug, and understand.

Dynamic thread memory leakage

Memory is not released after a dynamic thread exits.

Replication:

  • Call os_memory_get_pool_info to obtain pool size: 27468
  • Create a thread using os_thread_create
  • Thread may or may not call os_memory_allocate and os_memory_free
  • Thread exits
  • Call os_memory_get_pool_info again to obtain pool size: 26352

static void user_thread(void)
{
	ushell_printf(TTY_WARN "Dynamic thread started.\n\r" );
	for(unsigned i = 0; i < 10; i++ )
	{
		ushell_printf(TTY_INFO "Hello %u\n\r", i);
		os_thread_delay(100);
	}

	ushell_printf(TTY_WARN "Dynamic thread exiting...\n\r" );
}

void cmd_create_thread( char *argv[], int argc )
{
	(void)argv;
	(void)argc;
	ushell_printf(TTY_INFO "running os_thread_create...\n\r" );
	os_thread_create(2, 1024, user_thread);
}
ushell version 0.2
> 
help
info
lscmd
exit
args
demo
poolinfo
> poolinfo
num blocks in pool 1
pool size 27468

> demo
[INFO]  running os_thread_create...

> [WARN]  Dynamic thread started.
[INFO]  Hello 0
[INFO]  Hello 1
[INFO]  Hello 2
[INFO]  Hello 3
[INFO]  Hello 4
[INFO]  Hello 5
[INFO]  Hello 6
[INFO]  Hello 7
[INFO]  Hello 8
[INFO]  Hello 9
[WARN]  Dynamic thread exiting...

> poolinfo
num blocks in pool 1
pool size 26352

> 

Queue does not unblock threads

When calling one of the following functions

os_queue_send();
os_queue_send_nb();
os_queue_send_ahead();
os_queue_send_ahead_nb();
os_queue_receive();
os_queue_receive_nb();

if there are threads blocked on the queue, they do not get unblocked when data or buffer space becomes available. This is because the following function call is missing from these functions:

queue_unlock_threads(p_q, &g_sch);

Thread running when state == BLOCKED

When a blocking function is called inside a critical section, the context switch does not get serviced until the lock is released. If a user application mistakenly calls a blocking function inside a critical section, the blocking function returns false when it tries to block, because it was never able to yield the context, and so the requested resource will never be granted. However, because the thread is still in the resource's waiting list, the scheduler will still try to access the scheduling info, which has already been released because the blocking function returned. This results in memory corruption. See the example below.

// assume the semaphore is 0 
os_enter_critical();
// schinfo block allocated on the stack by os_semaphore_wait
b_result = os_semaphore_wait( h_sem, 1000); 
// thread joined wait queue, requested yield, but not serviced
// schinfo released, the function returns false
os_exit_critical();
// context switch happens here, during this time the scheduler attempts to fetch
// schinfo to allocate resource, but it was already released because os_semaphore_wait
// returned.

Currently, sleeping while holding a lock is not allowed in the operating system because it would break the lock: the point of the lock is that no other context can gain access to the CPU and thus to the shared resource. A possible solution is to allow sleeping with the lock held. This would enable interrupts immediately before generating the context switch request and disable them as soon as the context resumes. It would require each thread to keep a copy of the critical nesting counter when yielding, while the global critical nesting counter would still be needed for interrupts; the global counter would also need to be made visible to the scheduler. The user must ensure that modifications to shared data are not interrupted by the sleep function.
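
For contrast, the same wait performed safely keeps the blocking call outside the critical section (a sketch built from the calls in the example above):

/* Block on the semaphore first, outside any critical section... */
b_result = os_semaphore_wait( h_sem, 1000 ); /* may yield and sleep here */

if( b_result )
{
	/* ...then enter the critical section only for the shared access itself */
	os_enter_critical();
	/* access the shared resource */
	os_exit_critical();
}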

Provisional features

Thread
Multi-threading

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Suspend/resume
  4. Get/Update priority
  5. Delay
  6. Yield
  7. Critical section

Memory
Efficient memory usage

  1. Dynamic allocation and deallocation

Queue
Inter-process communication

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Reset queue (clear all data)
  4. Peek push, non-blocking push and push with timeout
  5. Peek pop, non-blocking pop and pop with timeout

Semaphore
Inter-process synchronization

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Post
  4. Peek wait, non-blocking wait, and wait with timeout

Mutex
Inter-process synchronization

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Peek lock, non-blocking lock, and lock with timeout
  4. Peek recursive lock, non-blocking recursive lock and recursive lock with timeout

Signal

Inter-process, process-interrupt communication

  1. Dynamic creation and deletion
  2. Static creation and deletion using existing buffer
  3. Wait with info struct and timeout
  4. Send

Removing dynamic memory features

Planned removal of dynamic memory features because they are intrinsically incompatible with memory protection units (a safety feature on many MCUs).

It has also been determined that, in practice, most operating system objects are created statically at compile time. Although the RTOS already supports creating static objects, this should be made the primary way of doing things.

Dynamic memory may be provided as a feature completely separate from the RTOS. Allocation from a specific memory region should be supported, because most memory protection units only support a limited number of regions. Moving dynamic memory out of the RTOS would mean that it no longer performs garbage collection; that is, if a thread quits or an object is destroyed, the operating system will not try to reclaim the memory because it is completely unaware of it.

Queue implementation: cannot send data larger than the queue buffer

If the data being sent is larger than the queue buffer, the send function will always fail. It might be desirable to change the queue implementation so that data can be sent in smaller blocks as soon as the queue has free space, as sketched below.
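
Until the implementation changes, one caller-side workaround (a sketch only; QUEUE_BUF_SIZE, h_q, and the os_queue_send argument order are all assumptions) is to split the data into chunks no larger than the queue buffer:

/* Sketch of a caller-side workaround: send the data in chunks that
 * each fit into the (assumed) queue buffer size. */
const uint8_t *p = (const uint8_t*) p_data;
size_t remaining = total_size;

while( remaining > 0 )
{
	size_t chunk = ( remaining > QUEUE_BUF_SIZE ) ? QUEUE_BUF_SIZE : remaining;

	os_queue_send( h_q, p, chunk, 50 ); /* blocks until the queue has space */
	p += chunk;
	remaining -= chunk;
}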

Casting struct lstitem_s from other list element types is unsafe

struct lstitem_s
{
	struct lstitem_s *volatile p_prev; /* previous item */
	struct lstitem_s *volatile p_next; /* next item     */
};

struct mblk_s
{
	struct mblk_s *volatile p_prev; /* previous block */
	struct mblk_s *volatile p_next; /* next block     */
	volatile uint_t size;           /* block size     */
	struct mlst_s *volatile p_mlst; /* parent list    */
};

The RTOS reuses its linked list code by defining insertion/removal functions for struct lstitem_s and then casting other linked list element types, such as mblk_s, to that type in order to reuse some of the functions. Both lstitem_s and mblk_s have two pointers at the beginning of the struct. This, however, may not always work, because no assumptions can be made about the internal layout of the structs even though they are defined very similarly.

The C standard has a notion of a 'common initial sequence': structs of different types that share a common initial sequence and are members of the same union are guaranteed to have the same memory layout in their common part. In this case, however, even though the leading members are all pointers, they point to different types, so they do not generally form a 'common initial sequence'.

It is unlikely that the compiler will generate different memory layouts for the initial part of these structs, but there is no guarantee that it wouldn't because this type of conversion is not backed by the language standards.
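
One cast-free alternative (a sketch of a possible change, not something the repository currently does) is to embed the generic item as a named member and recover the containing struct with offsetof():

/* Sketch: embed the generic list item inside mblk_s and convert back
 * with offsetof(), which the language standard does support. */
#include <stddef.h>

struct mblk_s
{
	struct lstitem_s item;          /* generic links, used directly by the list code */
	volatile uint_t size;           /* block size  */
	struct mlst_s *volatile p_mlst; /* parent list */
};

#define MBLK_FROM_ITEM( p_item ) \
	( (struct mblk_s*) ( (char*) (p_item) - offsetof( struct mblk_s, item ) ) )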

Queue peek with no data transfer

Sometimes it is desirable to peek at the queue to wait for it to reach a certain fill level. No data transfer is needed in this case. Currently the queue doesn't support peeking with a NULL destination buffer.

The following features can be added:

os_queue_peek(h_q, NULL, 512, 50);
os_queue_peek_nb(h_q, NULL, 512);
