
LernaLang

Project Description

LernaLang is a language-learning mobile application built with React Native, Firebase, and the OpenAI API. It gives users a way to practice speaking and writing in a foreign language more often. The app is particularly useful for users who don't have access to speakers of their target language, or who feel a little shy about looking silly while practicing.

Tech Stack & APIs

  • React Native
  • Firebase
  • OpenAI API

Getting Started

  1. Clone the repo - git clone https://github.com/Kmukasa/LernaLang.git
  2. Install dependencies - npm i
  3. Supply your own OpenAI API key
  4. Run the Expo dev client - npx expo start --dev-client
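The repo doesn't document where the key should live, so as a minimal sketch (the helper name getOpenAIKey and the OPENAI_API_KEY variable are assumptions, not from the repo), one option is to read it from an environment variable so it never lands in source control:

```javascript
// getOpenAIKey.js — hypothetical helper; the repo may wire keys differently.
// Reads the OpenAI key from an environment variable rather than hardcoding it.
function getOpenAIKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("Set OPENAI_API_KEY before starting the app");
  }
  return key;
}

module.exports = { getOpenAIKey };
```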

Running Storybook

  1. In App.js, comment out the app code and uncomment the line // export { default } from "./.storybook";
  2. Run npm run storybook-generate to generate the stories, then run npm run storybook-watch
  3. Run the client server - npm run start
  4. To run the app normally again, uncomment the app code and comment the Storybook export line back out

Demo

TBD

Personal reason for building this app

I've been "working" on a language learning application for a long time now but haven't had the resources, knowledge, or skills to complete it till now. This application came from a place of embarrassment regarding my own monolingualism and fear of looking stupid whilst learning something new. I've seen how the best way to master a skill is through repetition, so I decided to build this application to practice language learning more often and also to practice coding more. With the dawn of AI applications, I realized that building this application would be easier than ever. I am excited about the idea of using AI to democratize education in my own and others' lives. If you've found yourself on this page, know that it is with great joy and grit that this application has come to life. If you have any questions or would like to contribute, I'd be happy to chat!

License

TBD


Issues

Add NodeJS

Add a Node.js Server

  • Add server - Node.js + Express
  • Add getTranslation endpoint
  • Add generateSpeechToText endpoint
  • Add getMessage endpoint
GET getTranslation(textToTranslate: string) -> translation: string - wrapper for Google Cloud Translation
POST generateSpeechToText(fileUri: string) -> transcription: string - wrapper for Whisper
GET getMessage(messages: array) -> messageObject - wrapper for the OpenAI API

I've been going back and forth on whether to add a server, but I'm hoping it'll solve the following problems. I've been having a hard time translating text using the GCP Translate API, so I'm hoping a server will make this easier via the @google-cloud/translate npm package. In addition, React Native has a couple of limitations around getting the appropriate audio file object to pass to the Whisper API. Finally, it might be safer and easier to handle error and retry logic on a server.
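The endpoint list above can be sketched as framework-agnostic handler factories. The route shapes follow the signatures listed; the injected translateClient and openAIClient stand in for the real @google-cloud/translate and OpenAI calls so the logic is testable offline (all function and client names here are assumptions):

```javascript
// Hypothetical Express-style handlers for the proposed server.
// Clients are injected; the real wrappers would call Google Cloud
// Translation and the OpenAI API.

// GET /getTranslation?textToTranslate=...  ->  { translation }
function makeGetTranslation(translateClient) {
  return async (req, res) => {
    const translation = await translateClient.translate(req.query.textToTranslate);
    res.json({ translation });
  };
}

// POST /generateSpeechToText  body: { fileUri }  ->  { transcription }
function makeGenerateSpeechToText(whisperClient) {
  return async (req, res) => {
    const transcription = await whisperClient.transcribe(req.body.fileUri);
    res.json({ transcription });
  };
}

// GET /getMessage  body: { messages }  ->  messageObject
function makeGetMessage(openAIClient) {
  return async (req, res) => {
    const messageObject = await openAIClient.chat(req.body.messages);
    res.json(messageObject);
  };
}
```

In the app these would be mounted on an Express router; keeping the clients injected also makes the retry logic mentioned above easy to wrap around each call.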

Get message translation

[ ] Using Google PaLM 2, fetch the translation of each message sent to and from a user.

[ ] Add the translation to the messageData array.
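A sketch of the two tasks above, assuming each messageData entry is an object with a text field and with the PaLM 2 call injected as a plain translate function (both the field name and the injection are assumptions):

```javascript
// Attach a translation to every message in messageData.
// `translate` stands in for the PaLM 2 call, injected so this runs offline.
async function addTranslations(messageData, translate) {
  return Promise.all(
    messageData.map(async (message) => ({
      ...message,
      translation: await translate(message.text),
    }))
  );
}
```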

Add switch mode buttons

Problem
To switch between study mode and edit mode for the flash-cards, we need a button that toggles between the modes.


Task
Create a sliding-control component that maintains the state of which mode we are in.


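The task above can be sketched as the state logic the sliding control would own; the mode names come from the issue, while the component wiring (and its onPress call into toggleMode) is omitted:

```javascript
// Mode state for the sliding control. The UI renders one segment per mode
// and calls toggleMode when pressed; only the pure logic is shown here.
const MODES = ["study", "edit"];

function toggleMode(current) {
  const next = (MODES.indexOf(current) + 1) % MODES.length;
  return MODES[next];
}
```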

Setup expo-dev-client to support dev builds

"When your project requires custom native code, a config plugin, a custom runtime version, or a reduced bundle size of the app, you can transition from using Expo Go to developing a development build."

We need to start using expo-dev-client to support file streaming to export audio

Docs: create-a-build

Add Data Storage & Retrieval Logic

Add Data Storage & Retrieval Logic

To Do:

  • Create a conversations collection
  • Add functions to store conversations in the conversations collection in Firestore
  • Add a function to retrieve all of a user's conversations
  • Add a function to retrieve a conversation given a document id
  • Add a function to delete a conversation
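The to-do list above can be sketched with a Firestore-like db object injected so the logic runs offline; in the app these helpers would call the Firebase SDK (e.g. addDoc, getDocs, deleteDoc) on the conversations collection instead. All helper names and the db method shapes here are assumptions:

```javascript
// Conversation storage/retrieval sketch. `db` stands in for Firestore:
// add() appends a document and returns its id, getAll() lists a collection,
// delete() removes a document by id.
async function storeConversation(db, userId, messages) {
  return db.add("conversations", { userId, messages });
}

async function getUserConversations(db, userId) {
  const docs = await db.getAll("conversations");
  return docs.filter((doc) => doc.userId === userId);
}

async function deleteConversation(db, docId) {
  return db.delete("conversations", docId);
}
```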

OnLongPress get translation

When a user holds down a chatBubble, the corresponding translation should be swapped in for the current text. The text should be swapped back if the user holds down on the bubble again.

Why?
I realized when first using the app that, since I'm an absolute beginner, I sometimes won't understand the question or response that Lerna gives. To solve this, a user should be able to get the translation easily.
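A sketch of the swap logic a chatBubble could use, assuming each message carries text and translation fields and the onLongPress handler flips a showTranslation flag (all field and function names here are assumptions):

```javascript
// Which string the bubble should render, given the toggled flag.
// Falls back to the original text if no translation is available yet.
function displayedText(message, showTranslation) {
  return showTranslation && message.translation ? message.translation : message.text;
}

// Holding the bubble flips the flag; holding again swaps back.
function onLongPress(showTranslation) {
  return !showTranslation;
}
```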

Add useContext for chatStarted

Task(s):

  • Add useContext for chatStarted

Description:
In the app, we want a user to start a chat and for all the chat data to remain even if they navigate to different pages. We have one entry point for the Chat Options and Chat pages: if a chat has started, the Chat page should appear; if not, the Chat Options page should. The chatStarted prop handles this logic.
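The entry-point rule described above boils down to a small selection on the chatStarted value. The React context wiring (createContext/useContext) is omitted so this runs standalone, and the screen names are taken from the description:

```javascript
// Which screen the shared entry point should render, driven by the
// chatStarted value held in context.
function selectScreen(chatStarted) {
  return chatStarted ? "Chat" : "ChatOptions";
}
```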


Create User Profiles

User Profiles

To Do:

  • Design user profile
  • Implement user profile
  • Add sign-out capabilities on the user profile page
  • Enable the ability for a user to be deleted (if a user is deleted, their conversation history is deleted as well)
  • Enable deletion of conversation history

Error Logic

Error Logic

To Do:

  • Design and implement an error page to display when errors cannot be handled on a given page
  • Map out sources of errors, e.g. Firebase DB issues, OpenAI API errors, etc.
  • Decide on a course of action in case any of the known errors occurs
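The third to-do can be sketched as a small mapping from known error sources to an action. The source categories and actions here are assumptions drawn from the list above, since the real error shapes depend on the Firebase and OpenAI SDKs:

```javascript
// Map a known error source to a course of action. Unknown errors fall
// through to the dedicated error page described in the first to-do.
function planForError(source) {
  switch (source) {
    case "firebase-db":
      return "retry"; // transient Firestore failures: retry with backoff
    case "openai-api":
      return "retry"; // rate limits / timeouts: retry, then surface
    default:
      return "show-error-page";
  }
}
```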

Add voice to text support

Add voice-to-text support

[x] Use expo-av to record a response onPress of the mic button
[ ] Convert the recording to an uploadable file object
[ ] Fetch the transcript from OpenAI's whisper-1

Update:
I'm having an issue supplying the API with the appropriate file object in order to get a transcript. I either need to use a file-streaming package in React Native to get and send the file data (react-native-fs), or figure out whether expo-av stores or keeps a reference to the file data. So steps 1 and 3 are done, but step 2 is the hard part.
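A sketch of step 2, building the multipart body whisper-1 expects: the file and model field names follow OpenAI's transcription endpoint, and the { uri, name, type } descriptor is the usual React Native FormData convention for local files (the file name and MIME type below are assumptions):

```javascript
// Build the multipart form for a transcription request from a recording URI.
// In React Native, FormData accepts a { uri, name, type } descriptor for the
// file part; on Node the descriptor would instead need to be a Blob/stream.
function buildTranscriptionForm(fileUri) {
  const form = new FormData();
  form.append("file", { uri: fileUri, name: "recording.m4a", type: "audio/m4a" });
  form.append("model", "whisper-1");
  return form;
}
```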

OpenAI speech-to-text beta

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.