Repository containing releases of prebuilt GNU toolchains for DesignWare ARC Processors from Synopsys (available from 'releases' link below). The repository itself contains all the scripts required to build the GNU toolchain. Toolchain documentation available at https://foss-for-synopsys-dwc-arc-processors.github.io/toolchain . Processor Information available at

Home Page: http://www.synopsys.com/IP/ProcessorIP/ARCProcessors/Pages/default.aspx

License: GNU General Public License v3.0


toolchain's Introduction

ARC GNU Toolchain

This is the main Git repository for the ARC GNU toolchain. It contains documentation and various supplementary materials required for the development, verification and release of pre-built toolchain artifacts.

Branches in this repository are:

  • arc-releases is the stable branch for toolchain releases. The head of this branch is the latest stable release. This is the branch recommended for most users
  • arc-dev is the development branch for the current toolchain release

While the tip of the development branch should build and run reliably, there is no guarantee of this. Users who encounter an error are welcome to create a new bug report in the GitHub Issues of this toolchain project.

Build environment

The toolchain is built with Crosstool-NG, so we inherit all the capabilities provided by that powerful and flexible tool. We recommend that those interested in rebuilding the ARC GNU tools become familiar with the Crosstool-NG documentation available at https://crosstool-ng.github.io/docs to better understand its capabilities and limitations. In a nutshell, once the environment is set up (described in detail below), all that needs to be done is:

./ct-ng sample_name
./ct-ng build

Crosstool-NG is meant to be used in a Unix-like environment, so the best user experience is achieved on up-to-date mainstream Linux distributions, which have all the needed tools in their repositories.

Crosstool-NG is also known to work on macOS with Intel processors and will hopefully soon be usable on macOS with ARM processors as well. That said, the ARC GNU cross-toolchain for macOS may be built natively on macOS, or it may be built in a canadian-cross manner (see https://crosstool-ng.github.io/docs/toolchain-types) on a Linux host using OSXCross as a cross-toolchain for macOS.

There are ways to build the ARC GNU cross-toolchain on Windows as well; the most convenient is to use Windows Subsystem for Linux v2 (WSL2) or any other full-scale virtual machine with Linux inside. It is also possible to use the canadian-cross approach for Windows with a MinGW cross-toolchain on a Linux host. Moreover, even the MinGW cross-toolchain itself can be built with Crosstool-NG right in place, limiting the number of external dependencies.

So our recommendation is to either use a pre-built toolchain for Linux, Windows or macOS (which can be found on the releases page) or build in a true Linux environment, be it a real Linux host or a virtual machine.

Due to the requirements of some toolchain components, both for building from source and for running a prebuilt toolchain, it is necessary to use an up-to-date Linux distribution.

As of today, the oldest supported distributions are:

  • CentOS/RHEL 7
  • Ubuntu 18.04 LTS

Prerequisites

The GNU toolchain for ARC has the same standard prerequisites as the upstream GNU toolchain, as documented in the GNU toolchain user guide and on the GCC website.

Autoconf

Starting from version 2023.03, Crosstool-NG, which is used for building the toolchains, requires Autoconf 2.71 instead of 2.67. It may not be available on old Linux distributions; in that case you can build it manually (use your own prefix):

wget https://ftp.gnu.org/gnu/autoconf/autoconf-2.71.tar.gz
tar -xf autoconf-2.71.tar.gz
cd autoconf-2.71
./configure --prefix=/tools/autoconf2.71
make
make install

Then configure your environment:

export PATH="/tools/autoconf2.71/bin:$PATH"

Ubuntu 18.04 and newer

sudo apt update
sudo apt install -y autoconf help2man libtool libtool-bin texinfo byacc flex libncurses5-dev zlib1g-dev \
                    libexpat1-dev texlive build-essential git wget gawk libncursesw5 \
                    bison xz-utils make python3 rsync locales

CentOS/RHEL 7.x

sudo yum install -y autoconf bison bzip2 file flex gcc-c++ git gperf \
                    help2man libtool make ncurses-devel patch \
                    perl-Thread-Queue python3 rsync texinfo unzip wget \
                    which xz

The latest Crosstool-NG may require build tools newer than those shipped with CentOS 7 by default (GCC 4.8.5 and Make 3.82, for example); at the very least, they are no longer sufficient for building the ARC toolchain for Windows hosts. In this case consider using the centos-release-scl repository to install newer tools:

# Install fresh tools
sudo yum install centos-release-scl
sudo yum install devtoolset-9

# Enable them in a new Bash session
scl enable devtoolset-9 bash

Fedora & CentOS/RHEL 8.x

Enabling "PowerTools" repository for CentOS/RHEL 8.x

Some packages like gperf, help2man & texinfo are not available in the base package repositories; instead they are distributed via the so-called "PowerTools" repository. To enable it, do the following:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --set-enabled powertools

Then install all the packages in the same way as it is done for Fedora in the next section.

Packages installation in Fedora, CentOS/RHEL 8.x

sudo dnf install -y autoconf bison bzip2 diffutils file flex gcc-c++ git \
                    gperf help2man libtool make ncurses-devel patch \
                    perl-Thread-Queue python3 rsync texinfo unzip wget \
                    which xz

Locale installation for building uClibc

For building uClibc it is required to have the en_US.UTF-8 locale installed on the build host (otherwise the build fails; for details see #207). In case en_US.UTF-8 is missing, the following needs to be done:

  • Install the package with locales. In the case of Debian or Debian-based Linux distributions it is locales (for Fedora and RHEL-based systems see the note after this list).

  • Enable & generate en_US.UTF-8 locale

    # sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && locale-gen
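On Fedora and CentOS/RHEL 8.x the same result can usually be achieved by installing the English locale package and then checking that the locale is present; a minimal sketch, assuming dnf-based package management:

sudo dnf install -y glibc-langpack-en
locale -a | grep -i en_US.utf8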

Preparing Crosstool-NG

To simplify the toolchain building process we use a powerful, flexible and rather user-friendly tool called "Crosstool-NG". By its nature it is a mixture of Makefiles and bash scripts which hide all the magic & complexity needed to properly configure, build & install all the components of the GNU toolchain.

Still, Crosstool-NG is distributed as source and needs to be built before use, though that is as simple as:

# Get the sources
git clone https://github.com/foss-for-synopsys-dwc-arc-processors/crosstool-ng.git

# Step into the just obtained source tree
cd crosstool-ng

# Optionally select its version of choice, for example the one used for creation of `arc-2021.09` release
git checkout arc-2021.09-release

# Configure & build Crosstool-NG
./bootstrap && ./configure --enable-local && make
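Once built, the local ./ct-ng wrapper is available in the source tree; a quick sanity check is to list the available pre-defined configurations:

./ct-ng list-samples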

Building the Toolchain

Once Crosstool-NG is built and ready for use it is very easy to build a toolchain of choice. One just needs to decide on the configuration options to be used, or use one of the existing pre-defined configurations (which mirror the configuration of the pre-built toolchains we distribute via https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/releases).

Crosstool-NG configuration: use pre-configured "samples"

The following pre-defined configurations (they are called "samples" in Crosstool-NG parlance) are available at the moment:

  1. snps-arc-arc700-linux-uclibc - Linux uClibc cross-toolchain for ARC700 processors for 64-bit Linux hosts
  2. snps-arceb-arc700-linux-uclibc - Linux uClibc cross-toolchain for ARC700 processors (big endian) for 64-bit Linux hosts
  3. snps-arc-archs-linux-gnu - Linux glibc cross-toolchain for ARC HS3x & HS4x processors for 64-bit Linux hosts
  4. snps-arceb-archs-linux-gnu - Linux glibc cross-toolchain for ARC HS3x & HS4x processors (big endian) for 64-bit Linux hosts
  5. snps-arc-archs-linux-uclibc - Linux uClibc cross-toolchain for ARC HS3x & HS4x processors for 64-bit Linux hosts
  6. snps-arceb-archs-linux-uclibc - Linux uClibc cross-toolchain for ARC HS3x & HS4x processors (big endian) for 64-bit Linux hosts
  7. snps-arc-archs-native-gnu - Linux glibc "native" toolchain for ARC HS3x & HS4x processors
  8. snps-arc-elf32-win - Bare-metal cross-toolchain for a wide range of ARCompact & ARCv2 processors (ARC600, ARC700, ARC EM & HS) for 64-bit Windows hosts
  9. snps-arceb-elf32-win - Bare-metal cross-toolchain for a wide range of ARCompact & ARCv2 processors (ARC600, ARC700, ARC EM & HS - big endian) for 64-bit Windows hosts
  10. snps-arc-multilib-elf32 - Bare-metal cross-toolchain for a wide range of ARCompact & ARCv2 processors (ARC600, ARC700, ARC EM & HS) for 64-bit Linux hosts
  11. snps-arceb-multilib-elf32 - Bare-metal cross-toolchain for a wide range of ARCompact & ARCv2 processors (ARC600, ARC700, ARC EM & HS - big endian) for 64-bit Linux hosts
  12. snps-arc32-linux-uclibc - Linux uClibc cross-toolchain for ARC HS5x processors for 64-bit Linux hosts
  13. snps-arc32-native-uclibc - Linux uClibc "native" toolchain for ARC HS5x processors
  14. snps-arc64-snps-linux-gnu - Linux glibc cross-toolchain for ARC HS6x processors for 64-bit Linux hosts
  15. snps-arc64-snps-native-gnu - Linux glibc "native" toolchain for ARC HS6x processors
  16. snps-arc64-unknown-elf - Bare-metal cross-toolchain for ARC HS6x processors for 64-bit Linux hosts

To get Crosstool-NG configured with any of these samples just say: ./ct-ng sample_name. For example, to get the bare-metal toolchain for ARCompact/ARCv2 processors say: ./ct-ng snps-arc-multilib-elf32.

⚠️ Please note, though, that all of these samples are meant to be used for building on a Linux host. While some samples will work perfectly fine if they are used for Crosstool-NG configuration on, say, a macOS host, those which employ the so-called "canadian cross" build methodology (check whether CT_CANADIAN=y is defined in the sample's crosstool.config) won't work on non-Linux hosts, as they use an existing cross-toolchain for the target host (MinGW32 if we build a cross-toolchain for Windows hosts, or OSXCross if we build for macOS hosts).
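For example, a quick way to check whether a sample uses the canadian-cross methodology, assuming the standard Crosstool-NG samples/ layout in the crosstool-ng source tree:

grep CT_CANADIAN samples/snps-arc-elf32-win/crosstool.config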

Crosstool-NG configuration: manual tuning

If pre-defined "sample" doesn't meet one's requirements, it's possible to either fine-tune some existing sample or start over from scratch and make all the settings manually. For that just say ./ct-ng menuconfig and use menuconfig interface in the same way as it's done in many other projects like the Linux kernel, uClibc, Buildroot and many others.

⚠️ To start configuration from scratch, make sure the .config file doesn't exist in Crosstool-NG's root directory, or say ./ct-ng distclean.

The most interesting options for toolchain users might be:

Building a toolchain with Crosstool-NG

All the information above covered how to prepare Crosstool-NG for operation and how to configure it for a toolchain build with the needed settings. Now, when all the preparations are done, all that remains is to start the build process with:

./ct-ng build
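The build may take quite a while. By default Crosstool-NG also writes a detailed log to build.log in the working directory, which can be followed from another terminal (a sketch, assuming default logging settings):

tail -f build.log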

Building toolchain for Windows

Preparation for building ARC cross-toolchain for Windows host

To build a toolchain for Windows hosts it is recommended to do a "canadian cross-compilation" on Linux, that is, a toolchain for ARC targets that runs on Windows hosts is built on a Linux host. Build scripts are expected to be run in a Unix-like environment, so it is often faster and easier to build the toolchain on Linux than to do it on Windows using environments like Cygwin and MSYS. While those allow the toolchain to be built natively on Windows, this way is not officially supported and not recommended by Synopsys, due to the severe performance penalty of those environments on build time and possible compatibility issues.

Some limitations apply:

  • Only the bare-metal toolchain can be built this way.
  • It is required to have the toolchain for Linux hosts in the PATH for the canadian cross-build to succeed - it will be used to compile the standard library of the toolchain.

To build a canadian-cross toolchain on Linux, a MinGW toolchain must be installed on the build host. There are multiple ways to get MinGW installed:

  • On Ubuntu 18.04 & 20.04 that can be done with: sudo apt install mingw-w64

  • On CentOS/RHEL 8.x it's a bit more challenging:

    sudo dnf -y install dnf-plugins-core
    sudo dnf config-manager --set-enabled powertools
    sudo dnf install -y mingw32-gcc
  • Or it could be built with help of that same Crosstool-NG:

    ./ct-ng x86_64-w64-mingw32
    ./ct-ng build

Please note that due to recent changes in Crosstool-NG a small change in its configuration is required to avoid a problem with a missing libwinpthread-1.dll; see Crosstool-NG issue #1869 for more details. The required change consists of removing the CT_THREADS_POSIX option, i.e. deselect it in Crosstool-NG's menuconfig.
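After deselecting it, the resulting .config should contain the usual Kconfig marker for a disabled option, along these lines:

# CT_THREADS_POSIX is not set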

Building ARC cross-toolchain for Windows host

Once MinGW is available on the build host, just make sure its binaries are available via a standard system path, or otherwise add the path to them to the local PATH environment variable, and use the snps-arc-elf32-win sample for Crosstool-NG configuration.

Alternatively, it is possible to start from one of the other existing samples (for example snps-arc64-unknown-elf) and build it in a canadian-cross manner with the following simple changes.

Run ./ct-ng menuconfig and select CT_CANADIAN=y as well as set CT_HOST="i686-w64-mingw32".
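The resulting .config should then contain lines along these lines (a sketch; the exact host tuple depends on the MinGW toolchain installed on the build host):

CT_CANADIAN=y
CT_HOST="i686-w64-mingw32"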

Then build the toolchain as usual with ./ct-ng build.

Usage examples

In all of the following examples, it is expected that the GNU toolchain for ARC has been added to the user's PATH environment variable. Please note that the built toolchain by default gets installed into the current user's ~/x-tools/TOOLCHAIN_TUPLE folder, where TOOLCHAIN_TUPLE is dynamically generated based on the toolchain type (bare-metal, glibc or uclibc), the CPU's bitness (32- or 64-bit), the provided vendor name, etc.

For example:

  • With snps-arc-multilib-elf32 sample built toolchain will be installed in ~/x-tools/arc-snps-elf
  • With snps-arc64-unknown-elf sample built toolchain will be installed in ~/x-tools/arc64-snps-elf
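Accordingly, to make such a toolchain available in PATH one would typically do something like the following (a sketch, assuming the default ~/x-tools install location of the snps-arc-multilib-elf32 sample):

export PATH="$HOME/x-tools/arc-snps-elf/bin:$PATH"
arc-elf32-gcc --version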

Prefixes which start with arc- correspond to little endian toolchains. Prefixes which start with arceb- correspond to big endian toolchains. E.g., GDB for big endian ARCv2 baremetal toolchain is arceb-elf32-gdb. However, big endian tools are not available for ARCv3 yet.

Using nSIM simulator to run bare metal ARC applications

The nSIM simulator supports the GNU IO hostlink used by the libc library of the bare-metal GNU toolchain for ARC. The nSIM option nsim_emt=1 enables the GNU IO hostlink. nSIM also supports semihosting, which is essential for ARC-V targets; more details can be found in the nSIM documentation.

To start nSIM in gdbserver mode for ARC EM6:

$ $NSIM_HOME/bin/nsimdrv -gdb -port 51000 \
  -tcf $NSIM_HOME/etc/tcf/templates/em6_gp.tcf -on nsim_emt

And in a second console (GDB output is omitted):

$ arc-elf32-gcc -mcpu=arcem -g --specs=nsim.specs hello_world.c
$ arc-elf32-gdb --quiet a.out
(gdb) target remote :51000
(gdb) load
(gdb) break main
(gdb) break exit
(gdb) continue
(gdb) continue
(gdb) quit

GDB can also execute commands in batch mode so that this can be done automatically:

$ arc-elf32-gdb -nx --batch -ex 'target remote :51000' -ex 'load' \
                            -ex 'break main' -ex 'break exit' \
                            -ex 'continue' -ex 'continue' -ex 'quit' a.out

If one of the HS TCFs is used, then it is required to add -on nsim_isa_ll64_option to the nSIM options, because GCC for ARC automatically generates double-word memory operations, which are not enabled in the TCFs supplied with nSIM:

$ $NSIM_HOME/bin/nsimdrv -gdb -port 51000 \
  -tcf $NSIM_HOME/etc/tcf/templates/hs36.tcf -on nsim_emt \
  -on nsim_isa_ll64_option

The nSIM distribution doesn't contain big-endian TCFs, so -on nsim_isa_big_endian should be added to the nSIM options to simulate big-endian cores:

$ $NSIM_HOME/bin/nsimdrv -gdb -port 51000 \
  -tcf $NSIM_HOME/etc/tcf/templates/em6_gp.tcf -on nsim_emt \
  -on nsim_isa_big_endian

The default linker script of the GNU toolchain for ARC is not compatible with the memory maps of cores that only have CCM memory (EM4, EM5D, HS34), thus to run an application on nSIM with those TCFs it is required to link the application with a linker script appropriate for the selected core.
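For example, a custom linker script can be passed to the compiler driver with -T; a sketch, where em4.lds is a hypothetical script describing the core's CCM memory map:

$ arc-elf32-gcc -mcpu=em4_dmips -g --specs=nsim.specs -T em4.lds hello_world.c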

When an application is simulated on the nSIM gdbserver all input and output happen on the side of the host that runs the gdbserver, so in the "hello world" example the string will be printed in the console that runs the nSIM gdbserver.

Note the usage of the nsim.specs specification file. This file specifies that applications should be linked with the nSIM IO hostlink library libnsim.a, which is implemented in libgloss - part of the newlib project. libnsim provides several functions that are required to link C applications - those functions are considered board/OS specific, hence are not part of the normal libc.a. To link an application without nSIM IO hostlink support use the nosys.specs file - note that in this case system calls are either not available or have stub implementations. One reason to prefer nsim.specs over nosys.specs, even when developing for a hardware platform which doesn't have hostlink support, is that nsim will halt the target core on a call to the function "exit" and on many errors, while the exit function in nosys.specs is an infinite loop. For more details please see the documentation.
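For comparison, linking the same program without hostlink support would look like this:

$ arc-elf32-gcc -mcpu=arcem -g --specs=nosys.specs hello_world.c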

Using EM Starter Kit to run bare metal ARC EM application

A custom linker script is required to link applications for EM Starter Kit. Refer to the section "Building an application" of our EM Starter Kit page: https://foss-for-synopsys-dwc-arc-processors.github.io/toolchain/baremetal/em-starter-kit.html

Build instructions for OpenOCD are available at its page: https://github.com/foss-for-synopsys-dwc-arc-processors/openocd/blob/arc-2021.09/doc/README.ARC

To run OpenOCD:

openocd -f /usr/local/share/openocd/scripts/board/snps_em_sk_v2.3.cfg

Compile test application and run:

$ arc-elf32-gcc -mcpu=em4_dmips -g --specs=emsk_em9d.specs simple.c
$ arc-elf32-gdb --quiet a.out
(gdb) target remote :3333
(gdb) load
(gdb) break main
(gdb) continue
(gdb) step
(gdb) next
(gdb) break exit
(gdb) continue
(gdb) quit

Using Ashling Opella-XD debug probe to debug bare metal applications

A custom linker script is required to link applications for the EM Starter Kit. Refer to the section "Building an application" of our EM Starter Kit page: https://foss-for-synopsys-dwc-arc-processors.github.io/toolchain/baremetal/em-starter-kit.html. For different hardware configurations other changes might be required.

The Ashling Opella-XD debug probe and its drivers are not part of the GNU tools distribution and should be obtained separately.

The Ashling Opella-XD drivers distribution contains a gdbserver for the GNU toolchain. Command to start it:

$ ./ash-arc-gdb-server --jtag-frequency 8mhz --device arc \
    --arc-reg-file <core.xml>

Where <core.xml> is a path to an XML file describing the AUX registers of the target core. The Ashling drivers distribution contains files for ARC 600 (arc600-core.xml) and ARC 700 (arc700-core.xml). However, due to recent changes in GDB with regard to support for XML target descriptions, those files will not work out of the box, as the order of some registers has changed. To use the Ashling GDB server with GDB starting from the 2015.06 release, it is required to use the modified files that can be found in this toolchain repository in the extras/opella-xd directory.

Before connecting GDB to an Opella-XD gdbserver it is essential to specify the path to an XML target description file that is aligned with the <core.xml> file passed to the GDB server. All registers described in <core.xml> must also be described in the XML target description file in the same order, otherwise GDB will not function properly.

(gdb) set tdesc filename <path/to/opella-CPU-tdesc.xml>

XML target description files are provided in the same extras/opella-xd directory as Ashling GDB server core files.

Then connect to the target as with the OpenOCD/Linux gdbserver. For example a full session with an Opella-XD controlling an ARC EM target could start as follows:

$ arc-elf32-gcc -mcpu=arcem -g --specs=nsim.specs simple.c
$ arc-elf32-gdb --quiet a.out
(gdb) set tdesc filename toolchain/extras/opella-xd/opella-arcem-tdesc.xml
(gdb) target remote :2331
(gdb) load
(gdb) break main
(gdb) continue
(gdb) break exit
(gdb) continue
# Register R0 contains exit code of function main()
(gdb) info reg r0
(gdb) quit

Similar to OpenOCD, hostlink is not available in GDB with the Ashling Opella-XD.

Debugging applications on Linux for ARC

Compile application:

arc-linux-gcc -g -o hello_world hello_world.c

Copy it to the NFS share, or place it in the rootfs, or make it available to the target system in any other way. Start gdbserver on the target system:

[ARCLinux] # gdbserver :51000 hello_world

Start GDB on the host:

$ arc-linux-gdb --quiet hello_world
(gdb) set sysroot <buildroot/output/target>
(gdb) target remote 192.168.218.2:51000
(gdb) break main
(gdb) continue
(gdb) continue
(gdb) quit

Getting help

For all inquiries Synopsys customers are advised to use SolvNet. Everyone is welcome to open an issue against the toolchain repository on GitHub.

toolchain's People

Contributors

abrodkin amylaar anthony-kolesov apolyakov calvinatintel claziss dirker dmitriiburnaev evgeniididin falaleevms fbedard jeremybennett kolerov mischajonker mrssima nandub nikitasobolev pologovaa qwersem simonpcook synopsys-arcoss-auto t-j-teru tkrasnukha vineetgarc vkremneva wangnuannuan yaroslavsadin


toolchain's Issues

Problem with nSIM simulator

I am going to explain exactly what I have done, and you can tell me if everything is correct:

I downloaded the nSIM simulator (but I cannot use it because I didn't know that I needed more things).

So, I saw this website: "https://github.com/foss-for-synopsys-dwc-arc-processors/arc_gnu_eclipse/wiki", and at the bottom of the page I clicked on "Installation", which took me to another page: "https://github.com/foss-for-synopsys-dwc-arc-processors/arc_gnu_eclipse/wiki/Installation". Since I use Ubuntu (Linux) I followed the steps: I downloaded Eclipse and then the CDT, then I downloaded the Eclipse plugins from this website "https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/releases/tag/arc-2016.03" (arc_gnu_2016.03_ide_plugins.zip) and installed them through Eclipse with the "Install New Software" option. So I followed the steps of that website, except the last paragraph which says "5. Installing plugin on Linux Host".

So do you think the steps I have followed are correct?
Could you explain to me how I can install "arc_gnu_2016.03_prebuilt_elf32_be_linux_install.tar.gz"?

Thank you for everything. I hope I explained it well.

ARC GNU 2014.12 - GDB visibility vs. MPU

Hi all,

I post my question following suggestion by one of my Synopsys support contacts (Igor).
No CASE or STAR has been filed yet.

We use ARC GNU 2014.12 for the moment (toolchain and debugger).
We use ARC EM6 CPU w/ MPU option.
We don't use secure shield option.
We run OpenRTOS on top; for the moment we stay in supervisor mode the whole time.

I'd like to know if there may be possible issues where the native ARC GDB does not have full visibility of the code or is able to circumvent e.g. MPU enforcement (that would be natural, I'd say).

Indeed my very concrete problem is that in the Eclipse IDE I can see ASM/C intermixed code in the disassembly window until I reach our main() function; some configuration follows, and the disassembly starts to behave differently just after the MPU has been configured.
"Differently" means: the disassembly code remains empty, I only see C lines, but the address/position link seems to be preserved.

Below are two samples (two projects but exactly the same toolchain, debugger, IDE etc. versions) – one project I can properly disassemble, the other not.
You can observe that when it is OK the inst="…" string is not empty, and when it is not, it is empty.

Disassemble OK

(gdb)
292-data-disassemble -s 0x1004c8 -e 0x1004e8 -- 0
292^done,asm_insns={address="0x001004c8",func-name="Reset",offset="8",inst="sub_s sp,sp,4"},{address="0x001004ca",func-name="Reset",offset="10",inst="ld r2,[0x00801800]"},{address="0x001004d2",func-name="Reset",offset="18",inst="st r2,[fp,-4]"},{address="0x001004d6",func-name="Reset",offset="22",inst="bl 0x10040c \n"},{address="0x001004da",func-name="Reset",offset="26",inst="bl 0x107174 <g_InitLogUart>\n"},{address="0x001004de",func-name="Reset",offset="30",inst="bl 0x1007b0 <A_Test>\n"},{address="0x001004e2",func-name="Reset",offset="34",inst="bl 0x101f54 <B_Test>\n"},{address="0x001004e6",func-name="Reset",offset="38",inst="bl 0x103398 <C_Test>\n"}

Disassemble KO

(gdb)
850-data-disassemble -s 0x11cb54 -e 0x11cb74 -- 0
850^done,asm_insns={address="0x0011cb54",func-name="prvIdleTask",offset="12",inst=" "},{address="0x0011cb58",func-name="prvIdleTask",offset="16",inst=" "},{address="0x0011cb5c",func-name="prvIdleTask",offset="20",inst=" "},{address="0x0011cb60",func-name="prvIdleTask",offset="24",inst=" "},{address="0x0011cb64",func-name="prvIdleTask",offset="28",inst=" "},{address="0x0011cb68",func-name="prvIdleTask",offset="32",inst=" "},{address="0x0011cb6c",func-name="prvIdleTask",offset="36",inst=" "},{address="0x0011cb70",func-name="prvResetNextTaskUnblockTime",offset="0",inst=" "}

Thanks & Br Christophe

strip command strips .debug_frame and breaks stack unwinding

Buildroot runs "strip --strip-unneeded" on all .ko files (i.e. Linux kernel modules). Unfortunately, this also removes the .debug_frame section which is required for stack unwinding in the Linux kernel. The Linux kernel Makefile uses "strip --strip-debug" which also removes .debug_frame.

The only way I found to avoid this is to use the following objcopy command instead of the strip command. It's not very pretty, though. Is there a better way? I wish strip had a --keep-section= command line argument.

objcopy --remove-section=.debug_aranges --remove-section=.debug_info --remove-section=.debug_abbrev --remove-section=.debug_line --remove-section=.debug_str --remove-section=.debug_loc --remove-section=.debug_ranges

Add option --strip to ./build-all.sh

Add an option --strip that will strip debug symbols from host executables. It will run strip -g on ./arc-elf32, ./arc-linux-uclibc, ./bin, ./libexec/gcc/4.8.0. Debug symbols don't make a lot of sense for end users, but they occupy 2/3 of the space in the compiled toolchain.

I would suggest making this the default (providing --no-strip as a negation), but would be happy even if it were an optional choice.

Make stack bigger in arc-nsim.exp

Claudiu told me that the stack size should be 1024m for the arc-nsim.exp board instead of the current 32m. This is required for some memory-intensive GCC tests that currently fail. This should be checked and confirmed before committing.

lp_count register width

It has been found that in compilations for the Intel Quark SE C1000 Sensor Core (ARC), while the lp_count register is only 16 bits (i.e., LPC_SIZE = 16), the compiler seems to assume the register is always 32 bits (assumes LPC_SIZE = 32). As such, some loops of more than 65535 iterations are being incorrectly compiled to use the lp_count register. It would be good if the ARC GCC compiler could account for the parameterizable width of the lp_count register, so that compilations do not use the register when the loop iterates more times than the register size can account for.

Compiling __attribute__((naked)) functions automatically adds jump

Hi,

I'm using the prebuilt toolchain 2006.03 with gcc 4.8.5. I want to compile a single instruction without any additional instructions added by the compiler. Every instruction I try to compile is followed by nop and j_s [blink] instructions, see the following example:

  • original source code
__attribute__((naked)) void 
test(void) { asm("mov r0, r1 \n"); }
  • disassembly of the resulting object file using objdump
   0:	200a 0040           	mov	r0,r1
   4:	78e0                	nop_s
   6:	7ee0                	j_s	[blink]

How can I compile the instruction above without the "additional" nop and j_s instruction?

objdump crashes when opcodes are reused

In the following configuration, objdump crashes.

testcode.s

        .extInstruction abcd, 7, 0x21, SUFFIX_NONE, SYNTAX_3OP
        .extInstruction efgh, 7, 0x20, SUFFIX_NONE, SYNTAX_2OP
        .extInstruction abab, 7, 0x21, SUFFIX_NONE, SYNTAX_2OP

        .section .text
        .global __start
__start:
      mov r1,0x0
      mov r2,0x1
      mov r3,0x100

      abcd r1,r2,r3
      efgh r2,r3
      swi

arc-elf32-gcc -mcpu=arcem -nostdlib testcode.s
arc-elf32-objdump -d a.out

throws this error.

a.out: file format elf32-littlearc

Disassembly of section .text:

00000100 <__start>:
100: 214a 0000 mov r1,0
104: 224a 0040 mov r2,0x1
108: 238a 0004 mov r3,256
arc-elf32-objdump: /home/akolesov/pub/jenkins_root/akolesov-lab/workspace/gnu_release/binutils/opcodes/arc-dis.c:473: print_insn_arc: Assertion `opcode != ((void *)0)' failed.
10c: Aborted

It seems like the issue is related to the duplicate sub-opcode (0x21). However, as the two different instruction extensions have different operand counts, this should be a valid case.

64bit division support in toolchain

Hi,

I am using the ARC GNU 2016.03 toolchain. The toolchain has been compiled locally for the Linux 4.6.3 kernel. The target platform is ARC770D.

I am seeing the following warning while linking our code:
4.6.3/../linux/modules/lib/modules/4.6.3/extra/XXX.ko needs unknown symbol __udivdi3

I suspect this originates from div_u64() in our code. The same sources work fine with an older toolchain.
Also, a quick grep inside the toolchain directory shows that __udivdi3 may already be part of the toolchain:

share/info/gccint.info -
-- Runtime Function: unsigned long __udivdi3 (unsigned long A, unsigned long B)

Are we missing linking against some (math?) library or so?

Thanks,
Avinash

EM Starter Kit

I have recently started trying to use an EM Starter Kit.
I am trying to follow the instructions here
https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/wiki/EM-Starter-Kit

Here are my issues

  1. I have installed
    https://github.com/foss-for-synopsys-dwc-arc-processors/arc_gnu_eclipse/releases/download/arc-2014.08/ARC_GNU_IDE_2014.08_win_install.exe

  2. I tried to look for the directory extras/em_starter_kit - but cannot find this in the installation tree?
    Where are the linker scripts such as em5.lds?

  3. I then compiled an app using the instructions here
    https://github.com/foss-for-synopsys-dwc-arc-processors/arc_gnu_eclipse/wiki/Building-a-C-Project
    but when I try to debug following these instructions

    openocd -s C:\arc_gnu\share\openocd\scripts -f C:\arc_gnu\share\openocd\scripts\target\snps_em_sk.cfg
    I get a message that the file C:\arc_gnu\share\openocd\scripts\target\snps_em_sk.cfg does not exist, which is correct, but there is a file called C:\arc_gnu\share\openocd\scripts\target\snps_em_sk_fpga.cfg
    so I presume this is a typo
    so next I try
    openocd -s C:\arc_gnu\share\openocd\scripts -f C:\arc_gnu\share\openocd\scripts\target\snps_em_sk_fpga.cfg
    and now I get

C:\Users\moore>openocd -s c:\arc_gnu\share\openocd\scripts -f c:\arc_gnu\share\o
penocd\scripts\target\snps_em_sk_fpga.cfg
Open On-Chip Debugger 0.9.0-dev-g8be39d0-dirty (2014-12-15-17:16)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.sourceforge.net/doc/doxygen/bugs.html
Error: session's transport is not selected.
Info : session transport was not selected, defaulting to JTAG
Error: Debug adapter doesn't support any transports?
Runtime Error: embedded:startup.tcl:20:
in procedure 'script'
at file "embedded:startup.tcl", line 58
in procedure 'jtag' called at file "c:\arc_gnu\share\openocd\scripts\target\snps
_em_sk_fpga.cfg", line 13
in procedure 'default_to_jtag' called at file "embedded:startup.tcl", line 165
in procedure 'transport' called at file "embedded:startup.tcl", line 157
in procedure 'ocd_bouncer'
at file "embedded:startup.tcl", line 20

I am completely lost as to how to get a simple hello world compiled, linked and running in a debugger.
Is anybody able to give guidance?

I would much prefer command line tools rather than an IDE

Thx
Lee

ARC GNU 2014.12 - When optimizing (-O1,2,s) getting Assembler message of bad instruction

Hello Anthony,

I've seen a strange behavior today when we compile our code.
We are using ARC GNU 2014.12 on a Windows 7.1-SP1-x64 host machine.
We use an ARCv2 EM6 CPU.

We compile the code with no optimisation at all for the moment in order to have the best debug experience.
Now we try to simply switch on optimisation (e.g. -O1, -O2, -Os); some files build with no pain, but suddenly the build stops with the message:

Build/obj/MyFile.s: Assembler messages:
Build/obj/MyFile.s:163: Error: bad instruction add1 r24,@__MyLinkerSymbol@sda,gp
Build/obj/MyFile.s:163: Error: bad instruction add1 r1,@__MyLinkerSymbol@sda,gp
Build/obj/MyFile.s:163: Error: bad instruction add_s r1,r1,@__MyLinkerSymbol@sda
make: *** [Build/obj/MyFile.s] Error 1
make: *** Waiting for unfinished jobs....

The error is the same regardless of which -O level I use. I am already looking at the GNU and ARC documentation to understand, but prefer to ask here as well.

Thanks for a quick answer if possible.

Best regards Christophe

How to build gdb without python?

Can anyone tell me how to build the arc_gnu 2015.06 toolchain without Python support in GDB?

In version 2014.12 I simply added --with-python=no to the configure command in build-elf32.sh at line 201, but I cannot figure out how to get the same result in 2015.06.

I see that toolchain/Makefile.release calls build-all.sh with --config-extra '--with-python=no' but when I add that to my build-all.sh command I get this error:

START ELF32: Fri Aug  7 01:45:50 BST 2015
Installing in /scratch/arc-elf32/Linux32/arc-elf32/
Building binutils ...
configure: error: invalid variable name: `'--with-python'

Any help would be appreciated.

Thanks

Jim Straus

Binary hangs with static linking

Hello,

I compiled the whole toolchain and tried to run a program on an ARC device. Everything worked fine, but when I tried static linking, the program seemed to "hang".

My example program is the Busybox example from here

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
  printf("Hello world!\n");
  sleep(999999999);
}

arc-linux-uclibc-gcc test.c -o test is working, but
arc-linux-uclibc-gcc -static test.c -o test is not working, nothing is printed and the program does not finish.

Do you have any idea where it could come from?

GDB fix for PR remote/17028

Hi Support

There was a fix to a bug in gdb 2014-06-11, PR remote/17028
How can I see if this is incorporated into your stream ?

I tried building off the gdb head configuring for target arc-elf32, but the configure/make fails, so I presume I need to take the GDB from here.

Thx
Lee

Is there any detailed comparison between Metaware and GCC toolchain for ARC?

Hi,

I wonder if someone already did some profiling/comparison between the two toolchains. What is the main disadvantage of GCC over Metaware? I see that using the GCC toolchain has the advantage that once learned it can be used for many architectures (Cortex M0).
What features does MetaWare have that GCC does not? For example, is there an approach in GCC similar to the MetaWare debugger Rascal for connecting the debugger to an RTL simulation?
How fast is the code and what's the difference in the code density?

Thanks!
Maik

make: *** No rule to make target `install-pdf-{binutils,ld,gas}'. Stop.

I followed every detail of toolchain/README.md and built with:
./build-all.sh --no-uclibc --install-dir ~/arc_bin --cpu arcem --no-multilib
which gives this error:
make: *** No rule to make target `install-pdf-{binutils,ld,gas}'. Stop.

I read ./build-all.sh and appended the --no-pdf option, and the build succeeds.

SIGILL while returning from usermode

Hi,

We are porting the Linux kernel on our internal platform to 4.7 (gcc version 4.8.5, ARCompact ISA Linux uClibc toolchain built on 20160906).

We see this peculiar crash where init is launched from kernel boot but is killed immediately. I tried with my own sample init which prints messages to the console and returns; SIGILL is seen in this case as well.
I think this SIGILL is caused by stack corruption and is seen after returning from usermode.

Console log also has information about TASK_SIZE, VMALLOC regions etc.

Do you see any issue with the attached config file which may be causing this crash?

arc_4.7_config.txt
stacktrace_log2.txt

GCC arc-2016.09-eng010 accessing bitfields

I copied a small example program below that demonstrates the issue. I basically get different outputs depending on whether I compile it with "-O0" vs "-O".

struct ubifs_budget_req {
    unsigned int fast:7;
    unsigned int new_ino_d:13;
};

    if (req->new_ino_d & 7) {
        printf("new_ino_d & 7\n");
    }

If I compile the example with optimizations turned off ("-O0"), it generates the following good code:

       if (req->new_ino_d & 7) {
   103ec:       13fc b002               ld      r2,[fp,-4]
   103f0:       8240                    ld_s    r2,[r2,0]
   103f2:       ba27                    lsr_s   r2,r2,0x7
   103f4:       bacc                    bmsk_s  r2,r2,0xc
   103f6:       7a50                    extw_s  r2,r2
   103f8:       bac2                    bmsk_s  r2,r2,0x2
   103fa:       7a4b                    tst_s   r2,r2
   103fc:       f207                    beq_s   1040a <fff+0x3a>

It right shifts the value by 7 bits to get rid of the value of "fast" and to align new_ino_d with the LSB of r2. It then masks the least significant 13 bits and finally tests the least significant 3 bits.

If I compile the same code with "-O", it generates different code:

        if (req->new_ino_d & 7) {
   103e4:       8540                    ld_s    r2,[r13,0]
   103e6:       228b 823e               tst     r2,-120

-120 is equivalent to 0xffffff88, which I think is the incorrect bitmask to test for. If I'm not mistaken, 0x380 would be the correct bitmask i.e. ignore the lower 7 bits (".fast") and test the next 3 bits.

The example code has been lifted from the Linux kernel from fs/ubifs/budget.c: ubifs_assert(!(req->new_ino_d & 7));

struct ubifs_budget_req {
    unsigned int fast:7;
    unsigned int new_ino_d:13;
};

int printf(const char *format, ...);

void fff(struct ubifs_budget_req *req) {
    printf("Entering\n");
    if (req->new_ino_d & 7) {
        /* It should not print the following line because .new_ino_d is
         * 0 but it does so on ARC when compiling with "-O" */
        printf("new_ino_d & 7\n");
    }
}

int main(int argc, char *argv[]) {
    struct ubifs_budget_req req = {
        .fast = 8,
        .new_ino_d = 0,
    };
    fff(&req);
    return 0;
}

Differences between mainline GCC and Synopsys GCC

I am curious what the principal difference is between the GCC maintained by Synopsys and the upstream version. The Synopsys version seems to still be at GCC 4.8.5, but there is already GCC 6.2.x with ARC support.

Toolchain build failing for 2014.12 tag on Linux

Hello,

Was the tag moved recently? Our daily automated builds have failed since May 23rd.
I would appreciate your help.

Thank you,
Calvin

$ mkdir arc_toolchain
sys_maker@mkslavel64:~$ cd arc_toolchain/
sys_maker@mkslavel64:~/arc_toolchain$ git clone https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain.git
Cloning into 'toolchain'...
remote: Counting objects: 2408, done.
remote: Total 2408 (delta 0), reused 0 (delta 0), pack-reused 2408
Receiving objects: 100% (2408/2408), 1.56 MiB | 1.01 MiB/s, done.
Resolving deltas: 100% (1658/1658), done.
Checking connectivity... done.
sys_maker@mkslavel64:~/arc_toolchain$ cd toolchain/
sys_maker@mkslavel64:~/arc_toolchain/toolchain$ git checkout arc-2014.12
Note: checking out 'arc-2014.12'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at e9686ba... Create arc-versions.sh for tag arc-2014.12
sys_maker@mkslavel64:~/arc_toolchain/toolchain$ ./arc-clone-all.sh 
Cloned directories will be created in /home/sys_maker/arc_toolchain
Checking we can write in /home/sys_maker/arc_toolchain
Cloning cgen...
- successfully cloned ARC cgen repository
Cloning binutils...
- successfully cloned ARC binutils repository
Cloning gcc...
- successfully cloned ARC gcc repository
Cloning gdb...
- successfully cloned ARC gdb repository
Cloning newlib...
- successfully cloned ARC newlib repository
Cloning uClibc...
- successfully cloned ARC uClibc repository
Cloning linux...
- successfully cloned ARC linux repository
All repositories cloned
- full logs in /home/sys_maker/arc_toolchain/logs-4.8/clone-all-2015-05-29-2231.log
sys_maker@mkslavel64:~/arc_toolchain/toolchain$ ./build-all.sh --cpu arcem --no-uclibc --sim --no-pdf --rel-rpaths --config-extra "--enable-sim-endian=no --with-python=no LDFLAGS=-static"
Checking out GIT trees ...
ERROR: Failed to checkout GIT versions of tools
- see /home/sys_maker/arc_toolchain/logs-4.8/all-build-2015-05-29-2322.log
sys_maker@mkslavel64:~/arc_toolchain/toolchain$ 

At the end of all-build-2015-05-29-2322.log is this:

Checking out GIT trees
======================
Checking out branch/tag arc-2014.12 of cgen
  fetching branches
  fetching tags
  checking out arc-2014.12
Note: checking out 'arc-2014.12'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at eda052d... Check in ARCompact simulator.  A valid configuration is arc-elf. This is not quite finished and has most likely a few files that are obsolete & not used, but it's good enough to run gcc regression tests.
  pulling latest version
fatal: No remote repository specified.  Please, specify either a URL or a
remote name from which new revisions should be fetched.

Relocation error with offset +/-2

Hi,

In some point in my code, I get a crash because of an illegal instruction (ECR=0x020000).

After investigating I found out that the linker was incorrectly calculating a relocation.
I could strip out my code to the bare minimum and still get the error.

I am using (compiled from the github sources):
GNU assembler (ARCompact/ARCv2 ISA elf32 toolchain arc-2016.09-rc1) 2.27.51.20161017

Here is the minimal code (not making any sense but clearly showing what is wrong):

file arc1:

	.text
	.global	__start
	.extAuxRegister AUX_DCCM, 0x18, r|w
__start:
	mov_s r0, 0x90000000
	sr r0, [AUX_DCCM]
	mov_s    r0,16
	mov_s    r1,0
	mov_s    r2,1
loop:
	add_s    r0,r0,1
	brlo r0,52,loop

file arc2:

	.text
	.global func
func:
	breq    r5, 0, bug
	st      r5, [r0, 24]
	;nop_s                    ; <- THIS IS MY WORKAROUND
bug:
	st      r3, [gp, mysda@sda]

.section .sdata, "ax"
mysda:

I compile with:

arc-elf32-as -c -mcpu=arcem -mcode-density -o arc1.o arc1.s
arc-elf32-as -c -mcpu=arcem -mcode-density -o arc2.o arc2.s
arc-elf32-ld -o out.elf arc1.o arc2.o

Here is the object file:
arc-elf32-objdump -d arc2.o | grep -B 4 -A 1 bug\>:

00000000 <func>:
   0:   0d09 0010               breq.nt r5,0,8 <bug>
   4:   1818 0140               st      r5,[r0,24]
00000008 <bug>:
   8:   1a00 30c0               st      r3,[gp]

In the object file, everything looks fine. The breq jump to bug is using the right offset (but maybe the decoding is wrong).

The linked elf is:
arc-elf32-objdump -d out.elf | grep -B 4 -A 1 bug\>:

00000116 <func>:
 116:   0d09 0010               breq.nt r5,0,11c <func+0x6>
 11a:   1818 0140               st      r5,[r0,24]
0000011e <bug>:
 11e:   1a00 b0c0               st      r3,[gp,-256]

Here the bug is showing up: the jump is targeting the address 2 bytes before bug (though binary code is the same). Note that I have not run this code, but in my original program, I can correct the problem by inserting a nop_s just before.

I also see that there is an incorrect -256 data offset in the last instruction.

If you remove some instructions in the file arc1.s, the bad offset may disappear... and actually in my real code, the problem seems to appear or disappear randomly depending on unrelated modifications.

This is a big problem for me, even if I have a local workaround, as I do not want to put nop_s everywhere in my code to prevent all possible problems. The nop_s is also not correcting the -256 problem above, though I have not seen this yet in my program.

Useless routines used instead of optimized instructions

Hello,

When compiling, gcc is not using expected ARC instructions despites the options I provide.

All the options I am using are:

arc-elf32-gcc
-fcommon
-mav2em
-mlong-calls
-mno-sdata
-mcpu=arcem
-mdiv-rem
-mbarrel-shifter
-mspfp
-mdpfp
-mdiv-rem
-mnorm
-mswap
-mcode-density
-mmpy-option=wlh2

During the linking process (with my own script and libraries), the first problem I have is that I get many messages (for nearly every function) like:

undefined reference to `__st_r13_to_r15'
undefined reference to `__ld_r13_to_r15'
undefined reference to `__ld_r13_to_r15_ret'

It seems that instead of using the ENTER_S and LEAVE_S instructions to save/restore multiple registers on the stack, gcc is using library routines. Because I am using -mcode-density, I would expect gcc to use the appropriate instructions. Note also that this problem disappears if I use -mno-millicode, but the generated code is not optimized at all.

The second problem I also have is:

undefined reference to `_mpymu'
undefined reference to `_mpyu'

Again, because I am using -mmpy-option=wlh2, I would expect gcc not to use separate multiply routines.

Missing function definitions in newlib/libc/sys/arc/syscalls.c

The stat_r and times_r function definitions are missing in newlib/libc/sys/arc/syscalls.c. The result is that an attempt to link an object file with a call to stat() or times() results in an "undefined reference" error.

Looking at newlib/libc/sys/arc/sys/syscalls.h it appears there is underlying support for these functions, so I am thinking this was just an oversight.

To recreate the problem using the prebuilt toolchain from https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/releases/download/arc-2014.12/arc_gnu_2014.12_prebuilt_elf32_le_linux_install.tar.gz do the following:

echo '
#include <sys/stat.h>
#include <sys/times.h>
#include <sys/types.h>
#include <unistd.h>

int main() {

struct stat statBuf;
stat("file", &statBuf);

struct tms tmsBuf;
times(&tmsBuf);

}
' > test.c
arc-elf32-gcc test.c

which gives the following result:

/tmp/arc_gnu_2014.12_prebuilt_elf32_le_linux_install/bin/../lib/gcc/arc-elf32/4.8.3/../../../../arc-elf32/lib/libc.a(lib_a-sysstat.o): In function `stat': /home/akolesov/ws/arc-2014.12/unisrc-4.8/newlib/libc/syscalls/sysstat.c:12: undefined reference to `_stat_r'
/tmp/arc_gnu_2014.12_prebuilt_elf32_le_linux_install/bin/../lib/gcc/arc-elf32/4.8.3/../../../../arc-elf32/lib/libc.a(lib_a-systimes.o): In function `times': /home/akolesov/ws/arc-2014.12/unisrc-4.8/newlib/libc/syscalls/systimes.c:10: undefined reference to `_times_r'
collect2: error: ld returned 1 exit status

Thanks,

Jim Straus

Building toolchain for Windows on Linux32

Hello,

I'm getting an error while building the toolchain for Windows on a Linux 32 host.

cd toolchain
patch -p1 < windows-installer/build-elf32_windows.patch
patching file build-elf32.sh
Hunk #1 succeeded at 206 (offset 1 line).
Hunk #2 succeeded at 222 (offset 1 line).
./build-all.sh --cpu arc700 --no-uclibc --no-sim --no-pdf --rel-rpaths --config-extra '--enable-sim-endian=no --with-python=no LDFLAGS=-static' --no-unisrc --no-external-download --no-auto-pull --no-auto-checkout
Checking out GIT trees ...
Will not download external dependencies
START ELF32: Mon Apr 13 00:11:45 UTC 2015
Installing in /var/jenkins/workspace/ARC_Toolchain-Windows/INSTALL
Configuring tools ...
  finished configuring tools
Building tools ...
ERROR: tools build failed.
ERROR: arc-elf32- tool chain build failed.

logs-4.8/elf32-build-2015-04-13-0011.log showed these errors:
http://pastebin.com/Eeb1L7Mq

configure: error: in `/var/jenkins/workspace/ARC_Toolchain-Windows/bd-4.8-elf32/intl':
configure: error: C compiler cannot create executables
See `config.log' for more details.
make: *** [configure-intl] Error 1
make: *** Waiting for unfinished jobs....
checking for library containing strerror... configure: error: Link tests are not allowed after GCC_NO_EXECUTABLES.
make: *** [configure-libiberty] Error 1

and bd-4.8-elf32/intl/config.log showed these errors:
http://pastebin.com/weiN7YVe

gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) 
configure:2958: $? = 0
configure:2947: gcc -V >&5
gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:2958: $? = 4
configure:2947: gcc -qversion >&5
gcc: error: unrecognized command line option '-qversion'
gcc: fatal error: no input files
compilation terminated.
configure:2958: $? = 4
configure:2978: checking for C compiler default output file name
configure:3000: gcc -g -O2 -D__USE_MINGW_ACCESS  -static-libstdc++ -static-libgcc -static -Wl,--stack,12582912 conftest.c  >&5
/usr/bin/ld: unrecognized option '--stack'
/usr/bin/ld: use the --help option for usage information
collect2: error: ld returned 1 exit status
configure:3004: $? = 1
configure:3041: result: 
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| 
| int
| main ()
| {
| 
|   ;
|   return 0;
| }
configure:3047: error: in `/var/jenkins/workspace/ARC_Toolchain-Windows/bd-4.8-elf32/intl':
configure:3051: error: C compiler cannot create executables
See `config.log' for more details.

I've also looked at bd-4.8-elf32/config.log and found these:
http://pastebin.com/10J9Wp6d

configure:6039: checking for version 0.17.0 of CLooG
configure:6056: gcc -c -g -O2 -DCLOOG_INT_GMP   -I$$r/$(HOST_SUBDIR)/gmp -I$$s/gmp -I$$r/$(HOST_SUBDIR)/mpfr/src -I$$s/mpfr/src -I$$s/mpc/src   conftest.c >&5
conftest.c:10:27: fatal error: cloog/version.h: No such file or directory
 #include "cloog/version.h"
                           ^
compilation terminated.
configure:6056: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h.  */
| #include "cloog/version.h"
| int
| main ()
| {
| #if CLOOG_VERSION_MAJOR != 0     || CLOOG_VERSION_MINOR != 17     || CLOOG_VERSION_REVISION < 0
|     choke me
|    #endif
|   ;
|   return 0;
| }
configure:6062: result: no
configure:6081: checking for version 0.18.0 of CLooG
configure:6098: gcc -c -g -O2 -DCLOOG_INT_GMP   -I$$r/$(HOST_SUBDIR)/gmp -I$$s/gmp -I$$r/$(HOST_SUBDIR)/mpfr/src -I$$s/mpfr/src -I$$s/mpc/src   conftest.c >&5
conftest.c:10:27: fatal error: cloog/version.h: No such file or directory
 #include "cloog/version.h"
                           ^
compilation terminated.
configure:6098: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define LT_OBJDIR ".libs/"
| /* end confdefs.h.  */
| #include "cloog/version.h"
| int
| main ()
| {
| #if CLOOG_VERSION_MAJOR != 0     || CLOOG_VERSION_MINOR != 18     || CLOOG_VERSION_REVISION < 0
|     choke me
|    #endif
|   ;
|   return 0;
| }
configure:6104: result: no

Finally, there were also some errors in bd-4.8-elf32/libiberty/config.log:
http://pastebin.com/a5uvdabp

gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:3112: $? = 4
configure:3101: gcc -qversion >&5
gcc: error: unrecognized command line option '-qversion'
gcc: fatal error: no input files
compilation terminated.
configure:3112: $? = 4
configure:3128: gcc -o conftest -g -O2 -D__USE_MINGW_ACCESS  -static-libstdc++ -static-libgcc -static -Wl,--stack,12582912 conftest.c  >&5
/usr/bin/ld: unrecognized option '--stack'
/usr/bin/ld: use the --help option for usage information
collect2: error: ld returned 1 exit status
configure:3666: gcc -c -g -O2 -D__USE_MINGW_ACCESS  conftest.c >&5
conftest.c:15:3: warning: left shift count >= width of type [enabled by default]
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
   ^
conftest.c:15:3: warning: left shift count >= width of type [enabled by default]
conftest.c:16:10: warning: left shift count >= width of type [enabled by default]
          && LARGE_OFF_T % 2147483647 == 1)
          ^
conftest.c:16:10: warning: left shift count >= width of type [enabled by default]
conftest.c:15:7: error: variably modified 'off_t_is_large' at file scope
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
       ^
configure:3666: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| /* end confdefs.h.  */
| #include <sys/types.h>
|  /* Check that off_t can represent 2**63 - 1 correctly.
|     We can't simply define LARGE_OFF_T to be 9223372036854775807,
|     since some C++ compilers masquerading as C compilers
|     incorrectly reject 9223372036854775807.  */
| #define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
|   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
|              && LARGE_OFF_T % 2147483647 == 1)
|             ? 1 : -1];
| int
| main ()
| {
| 
|   ;
|   return 0;
| }
configure:3690: gcc -c -g -O2 -D__USE_MINGW_ACCESS  conftest.c >&5
configure:3690: $? = 0
configure:3698: result: 64
configure:3786: checking how to run the C preprocessor
configure:3817: gcc -E  conftest.c
configure:3817: $? = 0
configure:3831: gcc -E  conftest.c
conftest.c:10:28: fatal error: ac_nonexistent.h: No such file or directory
 #include <ac_nonexistent.h>
                            ^
compilation terminated.
configure:3831: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define _FILE_OFFSET_BITS 64
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>
configure:3856: result: gcc -E
configure:3876: gcc -E  conftest.c
configure:3876: $? = 0
configure:3890: gcc -E  conftest.c
conftest.c:10:28: fatal error: ac_nonexistent.h: No such file or directory
 #include <ac_nonexistent.h>
                            ^
compilation terminated.
configure:3890: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""
| #define PACKAGE_TARNAME ""
| #define PACKAGE_VERSION ""
| #define PACKAGE_STRING ""
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define _FILE_OFFSET_BITS 64
| /* end confdefs.h.  */
| #include <ac_nonexistent.h>

and more.

Do you have a guide on how to cross-compile for Windows?
Or can you spot what I'm doing wrong?

I'd appreciate your help.

Thank you,
Calvin

Compatibility between ARC GNU toolchain and Synopsys MetaWare C/C++ compiler

I am wondering if I can use the ARC GDB to debug code generated by the MetaWare compiler. I made a first attempt and hit a failure in DWARF parsing:
arc_gnu_2014.08_sources/unisrc-4.8/gdb/dwarf2-frame.c:683: internal-error: Unknown CFI encountered.
Any idea how to proceed with this issue?

Thanks, Oleg Raikhman

Issues building EM Starter Kit examples

Hi

I am trying to build the EM Starter Kit examples (Fibonacci) and have hit two issues:

  1. The Makefiles for GNU reference MAKE=gmake, but the IDE only contains make, not gmake
  2. The crt0.s fails to compile

Do you have access to these examples? I am not sure whether I am able to provide them directly.

Thx
Lee

ARC GDB client tries to load unloadable sections in .elf file

Hi,

We are trying to use GDB to connect to an Ashling GDB server, which is connected by JTAG to our ARC core.

In the code we have a section marked as "NOLOAD" in the linker script's "SECTIONS" command.
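
For context, such a section is typically declared along these lines in a GNU ld linker script (a minimal sketch; the ".noinit" name is only a placeholder and is not taken from this report):

    SECTIONS
    {
        /* NOLOAD asks the linker to allocate the section but mark it as not loadable */
        .noinit (NOLOAD) :
        {
            *(.noinit)
        }
    }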

The mcc-generated .elf file has this section marked as loadable with filesz 0, which, as far as I understand, indeed marks this section as not loadable.
The .elf file works fine with the MetaWare debugger, and similar cases were also OK with Lauterbach.

When we use GDB and run the "load" command, we see that it tries to load this section, causing corruption and preventing us from working with the target.
This happens both from the Eclipse toolchain and from a stand-alone GDB client by Synopsys, on both Windows and Unix.

I also opened a SolvNet case with Synopsys on that (number 8000784130). After running a small test case they created, they confirmed on their side that it seems like a GDB issue, and they are working on a fix.

How can we confirm and fix it in the toolchain here as well?

Thanks,
Noam

Error in gcc doc, arc builtins

There is a small error in the documentation for the builtin __builtin_arc_sr().

file: gcc/gcc/doc/extend.texi

__builtin_arc_sr (unsigned int auxr, unsigned int val)
The first argument, auxv, is the address of an auxiliary register, the second argument, val, is a compile time constant to be written to the register. Generates:

      sr  auxr, [val]

There is a mismatch between the first and second arguments. The correct documentation should be:

__builtin_arc_sr (unsigned int val, unsigned int auxr)
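
A usage sketch following the prototype suggested in this report (the register number 0x100 and the function name are made-up placeholders, and the argument order shown is the one proposed above, not a statement about what shipped GCC versions actually accept):

    /* Write the compile-time constant 0x1 to auxiliary register 0x100
       (placeholder number): value first, aux register address second,
       per the corrected prototype above. */
    static void set_example_aux_reg(void)
    {
        __builtin_arc_sr(0x1, 0x100);
    }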

gdb option 'set arc opella-target arcem'

Hi Support,

I now have rudimentary debugging working with openocd and gdb.

I want to see the auxiliary registers of the ARC EM; I try to do this with the command
(gdb) set arc opella-target arcem

This allows me to see the auxiliary registers by typing
(gdb) info all-register

but I can no longer step the target; typing the following no longer has an effect:
(gdb) stepi

Any suggestions?
My full set of GDB commands is as follows (the client is a different machine from the host running OpenOCD):
(gdb) set remotetimeout 60
(gdb) target remote win6403:3333
(gdb) load
(gdb) set arc opella-target arcem
(gdb) stepi

Thx
Lee

DesignWare ARC EM Starter kit

Hello,

I am a student at the University of Chile. A part-time professor from my university works at Synopsys Chile, and he lent me an EM Starter Kit to get familiar with it. However, I have not found much information about it, and the getting-started manual that comes with the kit doesn't help much. My professor has also been having problems getting the MetaWare program and license, which is why I am trying to use the GNU toolchain; but I don't actually know how to set up a program for this platform and compile it. If you have a tutorial video or PDF, I would really appreciate it if you could send it to me so I can get started with the kit.

Best regards
René Espinoza
Electrical Engineering student
Universidad de Chile
Email: [email protected]

gcc no longer passing -I directories to assembler

In release 2014.12, directories specified with "-I" on the gcc command line were passed to the assembler, but in release 2015.06 they are not. I have created a simple test case to show this:

File test.S:

.include "test.inc"

    testMACRO

File include/test.inc:

.macro testMacro
    clri    %r16
.endm

The following command using the prebuilt 2014.12 release works:

./arc_gnu_2014.12_prebuilt_elf32_le_linux_install/bin/arc-elf32-gcc -mcpu=arcem -c -o test.o test.S -I include

while the same command using the prebuilt 2015.06 release does not work:

arc_gnu_2015.06_prebuilt_elf32_le_linux_install/bin/arc-elf32-gcc -mcpu=arcem -c -o test.o test.S -I include
/tmp/cc8BGSXu.s: Assembler messages:
/tmp/cc8BGSXu.s:5: Error: can't open test.inc for reading: No such file or directory
test.S:3: Error: bad instruction `testmacro'

If we run both with the -v option we can see that with 2014.12 the assembler command includes "-I include" but with 2015.06 it does not include any -I directories.

Is there some reason for this change? Is there some workaround to get it to work?
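
For reference, one possible workaround (based on generic GCC/GAS behaviour, not verified against these particular releases) is to hand the include directory to the assembler explicitly through GCC's -Wa option:

    arc-elf32-gcc -mcpu=arcem -c -o test.o test.S -Wa,-Iinclude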

Thanks

Jim Straus

Error building elf32 on OS X

Hello,

Following this issue, I've run this command on OS X:
./build-all.sh --cpu arcem --no-uclibc --no-sim --no-pdf

This is the error message in the log:

mkdir arc-elf32/libgcc
Configuring in arc-elf32/libgcc
configure: creating cache ./config.cache
checking build system type... x86_64-apple-darwin14.1.0
checking host system type... arc-unknown-elf32
checking for --enable-version-specific-runtime-libs... no
checking for a BSD-compatible install... /usr/local/bin/ginstall -c
checking for gawk... awk
checking for arc-elf32-ar... /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./binutils/ar
checking for arc-elf32-lipo... arc-elf32-lipo
checking for arc-elf32-nm... /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/nm
checking for arc-elf32-ranlib... /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./binutils/ranlib
checking for arc-elf32-strip... /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./binutils/strip-new
checking whether ln -s works... yes
checking for arc-elf32-gcc...  /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/xgcc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/ -nostdinc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/ -isystem /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/targ-include -isystem /Users/scpark/Workspace/arc_gnu/unisrc-4.8/newlib/libc/include -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/bin/ -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/lib/ -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/include -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/sys-include -L/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./ld   
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether  /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/xgcc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/ -nostdinc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/ -isystem /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/targ-include -isystem /Users/scpark/Workspace/arc_gnu/unisrc-4.8/newlib/libc/include -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/bin/ -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/lib/ -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/include -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/sys-include -L/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./ld    accepts -g... yes
checking for  /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/xgcc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/ -nostdinc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/ -isystem /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/targ-include -isystem /Users/scpark/Workspace/arc_gnu/unisrc-4.8/newlib/libc/include -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/bin/ -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/lib/ -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/include -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/sys-include -L/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./ld    option to accept ISO C89... none needed
checking how to run the C preprocessor...  /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/xgcc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./gcc/ -nostdinc -B/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/ -isystem /Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/arc-elf32/newlib/targ-include -isystem /Users/scpark/Workspace/arc_gnu/unisrc-4.8/newlib/libc/include -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/bin/ -B/Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/lib/ -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/include -isystem /Users/scpark/Workspace/arc_gnu/INSTALL/arc-elf32/sys-include -L/Users/scpark/Workspace/arc_gnu/bd-4.8-elf32/./ld    -E
checking size of double... 8
checking size of long double... 8
checking whether decimal floating point is supported... no
configure: WARNING: decimal float is not supported for this target, ignored
checking whether fixed-point is supported... no
checking whether to use setjmp/longjmp exceptions... unknown
configure: error: unable to detect exception model
make: *** [configure-target-libgcc] Error 1

Can you please clue me in on how to approach this problem? I'm pretty new to this...

Thank you!

Linker doesn't error out when only linking libraries containing incomplete object code.

In a bare-metal configuration where all object files are put into libraries and only the libraries are passed to the linker, bad object files in a library do not cause any problems to be reported during the link stage, and the linker still generates all the output files.

If the object files in the library have an issue that should cause the link to fail (which it does if they are passed directly to the linker), no warnings or errors are reported when they are passed inside a library. The same bad library passed to the MetaWare linker does report errors and fails to link.

Also, if the objects are extracted from the library, then passed to the linker, the linker will fail with errors as expected.

An easy way to reproduce this is to archive several objects into a library, excluding one known required object file, then pass that library to the linker.
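
A reproduction sketch along those lines (file names are hypothetical and the sequence has not been verified here; main.c is assumed to call a function that is only defined in the object left out of the archive):

    # Build two of the three objects; missing.o (which defines the needed symbol) is left out
    arc-elf32-gcc -mcpu=arcem -c main.c util.c
    # Archive only the objects we have
    arc-elf32-ar rcs libapp.a main.o util.o
    # Link using only the library; per this report, no error is raised despite the unresolved reference
    arc-elf32-gcc -mcpu=arcem --specs=nsim.specs -o app.elf -L. -lapp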

ARC toolchain gcc 4.8 issues

Dear Synopsys open source sw support team,
I have issues with the ARC uClibc 4.8 toolchain:

  1. Version 4.8.0 does not support C++ exception handling. I compiled a simple throw-and-catch test with the 4.8.0 uClibc toolchain; the code could not catch the exception and aborted.
  2. Version 4.8.4 supports exception handling, but when compiling the code with the -O2 optimization flag, the loop below using isWhitespace is optimized out:

        while ((idxStart < idxEnd) && (isWhitespace(mBuf.get()[idxStart])))
        {
            ++idxStart;
        }

My platform is ARC750 V1.

Thank You

"_Uncached" keyword not support

I use the following definition with the MetaWare compiler to access hardware registers:
#define read_reg(addr) (*(_Uncached volatile unsigned int *)(addr))

But I can't find the _Uncached keyword under GCC.
-mno-volatile-cache doesn't help, because volatile is used elsewhere, such as:

struct {
    int a;
    volatile int flag;
}

Bypassing the cache for those volatile accesses would cause memory-consistency problems.
How can I define a cache-bypass operation without inserting asm code?
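
A possible workaround sketch, although it does rely on inline asm and therefore only partially answers the question: wrap the ARC cache-bypass load/store forms (ld.di / st.di) in small helpers. The helper names are made up, and this assumes the target actually implements the .di (cache bypass) variants:

    /* Hypothetical helpers for cache-bypassed register access on ARC. */
    static inline unsigned int read_reg_uncached(unsigned long addr)
    {
        unsigned int val;
        /* ld.di performs a load that bypasses the data cache */
        __asm__ volatile ("ld.di %0, [%1]" : "=r" (val) : "r" (addr) : "memory");
        return val;
    }

    static inline void write_reg_uncached(unsigned long addr, unsigned int val)
    {
        /* st.di performs a store that bypasses the data cache */
        __asm__ volatile ("st.di %0, [%1]" : : "r" (val), "r" (addr) : "memory");
    }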

norm instructions in em/libc.a?

When I use the -mcpu=arcem option with the prebuilt toolchain from:

https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain/releases/download/arc-2014.12/arc_gnu_2014.12_prebuilt_elf32_le_linux_install.tar.gz

the resulting executable includes a "norm" instruction, which should not be used when arcem is selected, at least according to:

https://github.com/foss-for-synopsys-dwc-arc-processors/gcc/wiki/compiler-options

Here is how to recreate:

echo '
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
' > hello.c
arc-elf32-gcc -mcpu=arcem hello.c -o hello.elf -Wl,-Map=hello.map
arc-elf32-objdump -d hello.elf > hello.od

Examining hello.od shows a norm instruction used in the strlen function, and looking in hello.map indicates the strlen function comes from arc-elf32/lib/em/libc.a. Doing an objdump on libc.a shows several uses of the norm instruction.

I also notice there is an arc-elf32/lib/em/norm directory - which presumably is a version of the em target libraries built with the -mnorm switch, but both libraries are using the norm instruction.

Is it possible there is a bug allowing the toolchain to use norm instructions when -mcpu=arcem is selected?

Thanks

Jim Straus

Mention tags in README.md

The README should point out to users that they need to use tags to get a stable release. I also suggest that, along with the "arc-4.8-R1" etc. tags, we have a rolling tag "arc-stable" which would point to the current latest stable tag. If we do not add such a tag, then the documentation should describe how to choose the latest tag (e.g. that arc_4_8-R1.2 is fresher than arc_4_8-R1).
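
For instance, assuming the tag names mentioned above, getting a stable release of this repository would look roughly like this:

    git clone https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain.git
    cd toolchain
    git checkout arc-4.8-R1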

ARC toolchain: problem with the linker

We manage a project under Eclipse of type ARC cross ELF32 target application; most things work as expected.
However, we have started having a problem with the linker.
We got to a point where, when we add a new source file, the compilation stage completes without error but the linker fails with the message "No such file or directory".

Example of the failure:

The linker displays this message:
arc-elf32-g++: error: ./Common/Drivers/System/Units/DMA/DMACofig/DMABurstConfig.o: No such file or directory

The linker command:
arc-elf32-g++ -mcpu=hs38 -mlittle-endian -mfpu=fpud_all -mmpy-option=9 -mdiv-rem -mll64 -g3 -gdwarf-2 -mabi=std -T"linker.script" -Xlinker -wrap=malloc -Xlinker -wrap=free -Wl,-Map,SVOS.map --specs=nsim.specs -o "SVOS.elf" ./Common/Drivers/System/Units/DMA/DMAConfig/DMABlockConfig.o ./Common/Drivers/System/Units/DMA/DMAConfig/DMABurstConfig.o ...

The file we added was not the file in the error message.

After removing the new source file, the problem was resolved.

Can you assist us with this issue?

-mshift-assist no longer supported in 2015.06

The prebuilt 2015.06 toolchain no longer supports the gcc -mshift-assist option that was supported in the prebuilt 2014.12 release:

2015.06:

$ arc_gnu_2015.06_prebuilt_elf32_le_linux_install/bin/arc-elf32-gcc --target-help | grep shift
  -mbarrel-shifter            Generate instructions supported by barrel shifter

2014.12:

$ arc_gnu_2014.12_prebuilt_elf32_le_linux_install/bin/arc-elf32-gcc --target-help | grep shift
  -mbarrel-shifter            Generate instructions supported by barrel shifter
  -mshift-assist              Enable shift assist instructions for ARCv2

I could not find any reference to this in the release notes. Is this intentional?

Thanks

Jim Straus

ARC toolchain questions

My name is Sharon and I am working at the Intel R&D center in Haifa, Israel.

Our goal is to run SW testing applications on a bare-metal system with an ARC HS38 CPU.
I downloaded and installed your Windows version of the GNU toolchain for ARC processors ("arc_gnu_2016.03_ide_win_install.exe").

We managed to create a project under Eclipse of type ARC cross ELF32 target application; most things work as expected.
However, we have some questions that we still haven't managed to resolve:

  1. How can we "replace" the malloc function that is defined in libg.a, or alternatively provide malloc with our custom address range (e.g. 0x3000000 => 0x4000000)?
  2. How can we control the executable start address? It is currently 0x100 by default, which is not suitable for our system memory map.
  3. Currently Eclipse creates only an ELF file, but we would also like to create a bin file; can it be added as an artifact? (See the sketch after this list.)
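
A possible starting point for these items (a sketch only; the options, addresses, and file names are illustrative, not taken from this report, and the right values depend on the memory map and linker script in use):

    # Item 1: one common approach is the linker's wrap mechanism, e.g. -Wl,--wrap=malloc,
    #         with a __wrap_malloc implementation supplied by the application.
    # Item 2: the .text start address can be moved via the linker, for example:
    arc-elf32-gcc ... -Wl,-Ttext=0x3000000 -o project.elf
    # Item 3: a raw binary can be produced from the ELF as a post-build step:
    arc-elf32-objcopy -O binary project.elf project.bin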

Note: our Makefile is currently auto-generated by Eclipse.

Looking forward to your reply.

Thanks,
Sharon.

ARC GNU - Possibility to write GCC plugins

Hello,

I have studied in the past the plugin interface GCC has offered since v4.5, and associated top-level implementations such as MELT. To be honest, writing GCC plugins remains a complex topic that not everyone is capable of carrying out properly. Hence my simple question here: is there a possibility that somebody from the GNU community, or even Synopsys in this case, might agree to help us write a specific plugin?

Thanks & Best regards Christophe

Problem building toolchain

I was able to change some of the scripts to be able to run this on Mac OS X, but after running for a long while it fails with this error:

./build-all.sh --no-uclibc --elf32 --install-dir /opt/toolchains
Checking out GIT trees ...
Downloading external dependencies...
Linking unified tree ...
START ELF32: Thu Dec 12 21:49:17 CST 2013
Installing in /opt/toolchains
Configuring tools ...
  finished configuring tools
Building tools ...
ERROR: tools build failed.
ERROR: arc-elf32- tool chain build failed.
./build-all.sh --no-uclibc --elf32 --install-dir /opt/toolchains  2543.22s user 1012.95s system 79% cpu 1:14:25.96 total

Where are the build log files? Are they here: ../log-4.8/*.log?

Linux 4.7 kernel support

Dear Synopsys open source sw support team,

We are planning to port one of our platforms to the Linux 4.7 kernel.
We would like to confirm that the GCC toolchain for the ARC platform supports / is compatible with the Linux 4.7 kernel before diving in. If the answer is affirmative, I would also like to know the latest stable toolchain release which supports this kernel.

Thanks,
Avinash Patil
